Test Report: QEMU_macOS 19364

663d17776bbce0b1e831c154f8973876d77c5fd1:2024-08-03:35636

Failed tests (97/282)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.16
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10
55 TestCertOptions 10.1
56 TestCertExpiration 195.44
57 TestDockerFlags 10.2
58 TestForceSystemdFlag 10.18
59 TestForceSystemdEnv 10.65
104 TestFunctional/parallel/ServiceCmdConnect 29.69
176 TestMultiControlPlane/serial/StopSecondaryNode 214.12
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 104.09
178 TestMultiControlPlane/serial/RestartSecondaryNode 208.51
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.39
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.38
183 TestMultiControlPlane/serial/StopCluster 202.09
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 9.91
193 TestJSONOutput/start/Command 9.76
199 TestJSONOutput/pause/Command 0.08
205 TestJSONOutput/unpause/Command 0.04
222 TestMinikubeProfile 10.18
225 TestMountStart/serial/StartWithMountFirst 9.94
228 TestMultiNode/serial/FreshStart2Nodes 9.91
229 TestMultiNode/serial/DeployApp2Nodes 75.81
230 TestMultiNode/serial/PingHostFrom2Pods 0.09
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.07
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.13
236 TestMultiNode/serial/StartAfterStop 57.12
237 TestMultiNode/serial/RestartKeepsNodes 9.19
238 TestMultiNode/serial/DeleteNode 0.1
239 TestMultiNode/serial/StopMultiNode 3.28
240 TestMultiNode/serial/RestartMultiNode 5.25
241 TestMultiNode/serial/ValidateNameConflict 20.25
245 TestPreload 10.05
247 TestScheduledStopUnix 10.44
248 TestSkaffold 12.21
251 TestRunningBinaryUpgrade 595.56
253 TestKubernetesUpgrade 18.01
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.76
267 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.19
269 TestStoppedBinaryUpgrade/Upgrade 573.8
271 TestPause/serial/Start 10.01
281 TestNoKubernetes/serial/StartWithK8s 9.93
282 TestNoKubernetes/serial/StartWithStopK8s 5.29
283 TestNoKubernetes/serial/Start 5.32
287 TestNoKubernetes/serial/StartNoArgs 5.3
289 TestNetworkPlugins/group/auto/Start 9.96
290 TestNetworkPlugins/group/calico/Start 9.79
291 TestNetworkPlugins/group/custom-flannel/Start 9.8
292 TestNetworkPlugins/group/false/Start 9.9
293 TestNetworkPlugins/group/kindnet/Start 10.03
294 TestNetworkPlugins/group/flannel/Start 9.91
295 TestNetworkPlugins/group/enable-default-cni/Start 10.01
296 TestNetworkPlugins/group/bridge/Start 9.86
297 TestNetworkPlugins/group/kubenet/Start 9.83
300 TestStartStop/group/old-k8s-version/serial/FirstStart 9.86
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/old-k8s-version/serial/Pause 0.1
311 TestStartStop/group/embed-certs/serial/FirstStart 9.81
312 TestStartStop/group/embed-certs/serial/DeployApp 0.09
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
317 TestStartStop/group/no-preload/serial/FirstStart 10.74
318 TestStartStop/group/embed-certs/serial/SecondStart 5.3
319 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
321 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
322 TestStartStop/group/embed-certs/serial/Pause 0.1
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.99
325 TestStartStop/group/no-preload/serial/DeployApp 0.09
326 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
329 TestStartStop/group/no-preload/serial/SecondStart 5.25
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
333 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
334 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
335 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
336 TestStartStop/group/no-preload/serial/Pause 0.1
338 TestStartStop/group/newest-cni/serial/FirstStart 9.97
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.73
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
349 TestStartStop/group/newest-cni/serial/SecondStart 5.25
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (14.16s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-977000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-977000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.15440225s)

-- stdout --
	{"specversion":"1.0","id":"d80ba71e-fba0-4d5f-b972-50bd18d1830a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-977000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a57a934-441a-43cc-bccc-b9f285857a5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19364"}}
	{"specversion":"1.0","id":"54a4560c-ed5e-4783-a5f9-e9a0ef14c410","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig"}}
	{"specversion":"1.0","id":"fa992042-1a8b-4a48-9253-7236ec3dfe43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e021bc00-7c85-4b72-b117-f25485a60214","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"54a13ae7-fc80-4890-b844-9299bf745171","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube"}}
	{"specversion":"1.0","id":"03495752-bbc4-4ad1-a34b-e72caff8467e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"9fcdf6aa-15cb-4c68-9157-c4767e3c1a34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3392d41-1f20-4a8b-ab51-590ee04eabba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9d5e7136-acaa-411a-babb-e9128add1b32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ccb1cad7-f5c7-4ef5-8d93-9d78fe7ffec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-977000\" primary control-plane node in \"download-only-977000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7a88c3c-694a-4877-a8cf-c9431ddd084e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"199ffc4d-7f68-4d56-9adb-0649906eff43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0] Decompressors:map[bz2:0x14000592b58 gz:0x14000592c50 tar:0x14000592b90 tar.bz2:0x14000592c10 tar.gz:0x14000592c20 tar.xz:0x14000592c30 tar.zst:0x14000592c40 tbz2:0x14000592c10 tgz:0x14000592c20 txz:0x14000592c30 tzst:0x14000592c40 xz:0x14000592c58 zip:0x14000592c60 zst:0x14000592cb0] Getters:map[file:0x14000a04770 http:0x14000840500 https:0x14000840550] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"ad820ee3-b84f-45ee-b15d-efed5bbb6b15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0803 17:19:54.943460    1675 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:19:54.943602    1675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:19:54.943606    1675 out.go:304] Setting ErrFile to fd 2...
	I0803 17:19:54.943608    1675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:19:54.943724    1675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	W0803 17:19:54.943807    1675 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19364-1166/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19364-1166/.minikube/config/config.json: no such file or directory
	I0803 17:19:54.945025    1675 out.go:298] Setting JSON to true
	I0803 17:19:54.963777    1675 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1158,"bootTime":1722729636,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:19:54.963882    1675 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:19:54.969580    1675 out.go:97] [download-only-977000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:19:54.969694    1675 notify.go:220] Checking for updates...
	W0803 17:19:54.969806    1675 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball: no such file or directory
	I0803 17:19:54.972496    1675 out.go:169] MINIKUBE_LOCATION=19364
	I0803 17:19:54.975558    1675 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:19:54.979593    1675 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:19:54.982624    1675 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:19:54.985578    1675 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	W0803 17:19:54.991478    1675 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 17:19:54.991723    1675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:19:54.997474    1675 out.go:97] Using the qemu2 driver based on user configuration
	I0803 17:19:54.997494    1675 start.go:297] selected driver: qemu2
	I0803 17:19:54.997498    1675 start.go:901] validating driver "qemu2" against <nil>
	I0803 17:19:54.997568    1675 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 17:19:55.001626    1675 out.go:169] Automatically selected the socket_vmnet network
	I0803 17:19:55.007307    1675 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0803 17:19:55.007426    1675 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 17:19:55.007442    1675 cni.go:84] Creating CNI manager for ""
	I0803 17:19:55.007458    1675 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0803 17:19:55.007509    1675 start.go:340] cluster config:
	{Name:download-only-977000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-977000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:19:55.013443    1675 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:19:55.016621    1675 out.go:97] Downloading VM boot image ...
	I0803 17:19:55.016634    1675 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0803 17:20:01.197960    1675 out.go:97] Starting "download-only-977000" primary control-plane node in "download-only-977000" cluster
	I0803 17:20:01.197983    1675 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 17:20:01.252941    1675 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 17:20:01.252949    1675 cache.go:56] Caching tarball of preloaded images
	I0803 17:20:01.253089    1675 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 17:20:01.257431    1675 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0803 17:20:01.257438    1675 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:01.332320    1675 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 17:20:07.880470    1675 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:07.880657    1675 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:08.576422    1675 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0803 17:20:08.576624    1675 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/download-only-977000/config.json ...
	I0803 17:20:08.576646    1675 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/download-only-977000/config.json: {Name:mk74275890f984c00a097c3b7fd89b40f4ead095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:20:08.576901    1675 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 17:20:08.577102    1675 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0803 17:20:09.024744    1675 out.go:169] 
	W0803 17:20:09.029714    1675 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0] Decompressors:map[bz2:0x14000592b58 gz:0x14000592c50 tar:0x14000592b90 tar.bz2:0x14000592c10 tar.gz:0x14000592c20 tar.xz:0x14000592c30 tar.zst:0x14000592c40 tbz2:0x14000592c10 tgz:0x14000592c20 txz:0x14000592c30 tzst:0x14000592c40 xz:0x14000592c58 zip:0x14000592c60 zst:0x14000592cb0] Getters:map[file:0x14000a04770 http:0x14000840500 https:0x14000840550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0803 17:20:09.029740    1675 out_reason.go:110] 
	W0803 17:20:09.038705    1675 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:20:09.041713    1675 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-977000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (14.16s)
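Root-cause note: the download dies on a 404 for the v1.20.0 darwin/arm64 kubectl checksum file; that release predates Kubernetes' darwin/arm64 client binaries, which is consistent with the checksum URL simply not existing. A quick manual check (illustrative only, not part of the test suite; curl follows dl.k8s.io's redirect and prints the final HTTP status):

	$ curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	404

The TestDownloadOnly/v1.20.0/kubectl failure that follows is a direct cascade: the binary was never cached, so stat-ing the cache path fails.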

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-018000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-018000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.853299083s)

-- stdout --
	* [offline-docker-018000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-018000" primary control-plane node in "offline-docker-018000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-018000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 17:58:36.084165    3884 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:58:36.084304    3884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:58:36.084310    3884 out.go:304] Setting ErrFile to fd 2...
	I0803 17:58:36.084312    3884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:58:36.084459    3884 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:58:36.085636    3884 out.go:298] Setting JSON to false
	I0803 17:58:36.103483    3884 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3480,"bootTime":1722729636,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:58:36.103558    3884 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:58:36.109058    3884 out.go:177] * [offline-docker-018000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:58:36.116931    3884 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:58:36.116950    3884 notify.go:220] Checking for updates...
	I0803 17:58:36.122861    3884 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:58:36.125924    3884 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:58:36.128898    3884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:58:36.132936    3884 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:58:36.135940    3884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:58:36.139291    3884 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:58:36.139349    3884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:58:36.142835    3884 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 17:58:36.149886    3884 start.go:297] selected driver: qemu2
	I0803 17:58:36.149897    3884 start.go:901] validating driver "qemu2" against <nil>
	I0803 17:58:36.149908    3884 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:58:36.151765    3884 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 17:58:36.154797    3884 out.go:177] * Automatically selected the socket_vmnet network
	I0803 17:58:36.157978    3884 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 17:58:36.158010    3884 cni.go:84] Creating CNI manager for ""
	I0803 17:58:36.158015    3884 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 17:58:36.158021    3884 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 17:58:36.158058    3884 start.go:340] cluster config:
	{Name:offline-docker-018000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:58:36.161974    3884 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:36.166856    3884 out.go:177] * Starting "offline-docker-018000" primary control-plane node in "offline-docker-018000" cluster
	I0803 17:58:36.170873    3884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:58:36.170898    3884 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:58:36.170907    3884 cache.go:56] Caching tarball of preloaded images
	I0803 17:58:36.170968    3884 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 17:58:36.170973    3884 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:58:36.171040    3884 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/offline-docker-018000/config.json ...
	I0803 17:58:36.171050    3884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/offline-docker-018000/config.json: {Name:mkbcde383c497764d74a458d67333640ae871a1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:58:36.171298    3884 start.go:360] acquireMachinesLock for offline-docker-018000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:58:36.171331    3884 start.go:364] duration metric: took 26µs to acquireMachinesLock for "offline-docker-018000"
	I0803 17:58:36.171347    3884 start.go:93] Provisioning new machine with config: &{Name:offline-docker-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:58:36.171398    3884 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:58:36.179905    3884 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 17:58:36.195782    3884 start.go:159] libmachine.API.Create for "offline-docker-018000" (driver="qemu2")
	I0803 17:58:36.195808    3884 client.go:168] LocalClient.Create starting
	I0803 17:58:36.195881    3884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:58:36.195921    3884 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:36.195932    3884 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:36.195976    3884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:58:36.196000    3884 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:36.196008    3884 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:36.196450    3884 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:58:36.353591    3884 main.go:141] libmachine: Creating SSH key...
	I0803 17:58:36.400495    3884 main.go:141] libmachine: Creating Disk image...
	I0803 17:58:36.400513    3884 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:58:36.400696    3884 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2
	I0803 17:58:36.423697    3884 main.go:141] libmachine: STDOUT: 
	I0803 17:58:36.423726    3884 main.go:141] libmachine: STDERR: 
	I0803 17:58:36.423796    3884 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2 +20000M
	I0803 17:58:36.432279    3884 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:58:36.432297    3884 main.go:141] libmachine: STDERR: 
	I0803 17:58:36.432325    3884 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2
	I0803 17:58:36.432329    3884 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:58:36.432345    3884 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:58:36.432371    3884 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:00:d5:61:fd:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2
	I0803 17:58:36.434219    3884 main.go:141] libmachine: STDOUT: 
	I0803 17:58:36.434244    3884 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:58:36.434268    3884 client.go:171] duration metric: took 238.462875ms to LocalClient.Create
	I0803 17:58:38.436339    3884 start.go:128] duration metric: took 2.264996625s to createHost
	I0803 17:58:38.436364    3884 start.go:83] releasing machines lock for "offline-docker-018000", held for 2.265100041s
	W0803 17:58:38.436400    3884 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:38.448229    3884 out.go:177] * Deleting "offline-docker-018000" in qemu2 ...
	W0803 17:58:38.461887    3884 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:38.461898    3884 start.go:729] Will try again in 5 seconds ...
	I0803 17:58:43.463943    3884 start.go:360] acquireMachinesLock for offline-docker-018000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:58:43.464326    3884 start.go:364] duration metric: took 290.792µs to acquireMachinesLock for "offline-docker-018000"
	I0803 17:58:43.464443    3884 start.go:93] Provisioning new machine with config: &{Name:offline-docker-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:58:43.464661    3884 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:58:43.473716    3884 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 17:58:43.518370    3884 start.go:159] libmachine.API.Create for "offline-docker-018000" (driver="qemu2")
	I0803 17:58:43.518421    3884 client.go:168] LocalClient.Create starting
	I0803 17:58:43.518536    3884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:58:43.518599    3884 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:43.518615    3884 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:43.518670    3884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:58:43.518713    3884 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:43.518725    3884 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:43.519225    3884 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:58:43.725069    3884 main.go:141] libmachine: Creating SSH key...
	I0803 17:58:43.839523    3884 main.go:141] libmachine: Creating Disk image...
	I0803 17:58:43.839532    3884 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:58:43.839739    3884 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2
	I0803 17:58:43.848832    3884 main.go:141] libmachine: STDOUT: 
	I0803 17:58:43.848849    3884 main.go:141] libmachine: STDERR: 
	I0803 17:58:43.848892    3884 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2 +20000M
	I0803 17:58:43.856642    3884 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:58:43.856670    3884 main.go:141] libmachine: STDERR: 
	I0803 17:58:43.856683    3884 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2
	I0803 17:58:43.856688    3884 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:58:43.856704    3884 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:58:43.856744    3884 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:5c:67:b8:23:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/offline-docker-018000/disk.qcow2
	I0803 17:58:43.858288    3884 main.go:141] libmachine: STDOUT: 
	I0803 17:58:43.858305    3884 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:58:43.858317    3884 client.go:171] duration metric: took 339.901166ms to LocalClient.Create
	I0803 17:58:45.860429    3884 start.go:128] duration metric: took 2.395813875s to createHost
	I0803 17:58:45.860496    3884 start.go:83] releasing machines lock for "offline-docker-018000", held for 2.396224042s
	W0803 17:58:45.860893    3884 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-018000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-018000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:45.876483    3884 out.go:177] 
	W0803 17:58:45.881710    3884 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:58:45.881768    3884 out.go:239] * 
	* 
	W0803 17:58:45.884575    3884 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:58:45.894476    3884 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-018000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-03 17:58:45.90962 -0700 PDT m=+2331.149345293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-018000 -n offline-docker-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-018000 -n offline-docker-018000: exit status 7 (65.229125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-018000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-018000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-018000
--- FAIL: TestOffline (10.00s)
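Root-cause note: this failure, and most of the VM-based failures below, is environmental rather than a regression in the code under test: QEMU is launched through socket_vmnet_client, and the daemon's socket at /var/run/socket_vmnet refuses connections, so both host-creation attempts die. A triage sketch for the build host, assuming socket_vmnet was installed via Homebrew as in minikube's QEMU driver docs (paths and service name follow that setup):

	# Is the socket present, and is the daemon registered with launchd?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep socket_vmnet
	# Restart the daemon as root (it must run as root to create the vmnet interface)
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet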

TestCertOptions (10.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-356000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-356000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.841170333s)

-- stdout --
	* [cert-options-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-356000" primary control-plane node in "cert-options-356000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-356000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-356000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-356000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-356000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-356000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (81.529459ms)

-- stdout --
	* The control-plane node cert-options-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-356000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-356000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-356000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-356000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-356000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.395208ms)

-- stdout --
	* The control-plane node cert-options-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-356000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-356000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-356000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-03 17:59:16.910217 -0700 PDT m=+2362.150912001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-356000 -n cert-options-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-356000 -n cert-options-356000: exit status 7 (29.167084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-356000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-356000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-356000
--- FAIL: TestCertOptions (10.10s)
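
All of the assertions above failed for the same upstream reason: the qemu2 driver could not reach the socket_vmnet daemon, so no VM ever booted and every subsequent step ran against a Stopped host. A minimal connectivity probe, assuming the default /opt/socket_vmnet install prefix and the socket path shown in these logs, is:

    # Confirm the daemon's unix socket exists
    ls -l /var/run/socket_vmnet

    # socket_vmnet_client connects to the socket and then execs the given
    # command with the connection on fd 3; a no-op command makes it a pure probe
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the probe also prints 'Failed to connect to "/var/run/socket_vmnet": Connection refused', the daemon is down on the build host and the failure is environmental rather than a minikube regression.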

TestCertExpiration (195.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-170000 --memory=2048 --cert-expiration=3m --driver=qemu2 
E0803 17:59:05.729682    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-170000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.115990334s)

-- stdout --
	* [cert-expiration-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-170000" primary control-plane node in "cert-expiration-170000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-170000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-170000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-170000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-170000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-170000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.182856875s)

-- stdout --
	* [cert-expiration-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-170000" primary control-plane node in "cert-expiration-170000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-170000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-170000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-170000" primary control-plane node in "cert-expiration-170000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-170000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-170000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-03 18:02:17.094069 -0700 PDT m=+2542.327830376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-170000 -n cert-expiration-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-170000 -n cert-expiration-170000: exit status 7 (63.267958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-170000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-170000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-170000
--- FAIL: TestCertExpiration (195.44s)
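
Both starts died at VM creation, so the certificate-rotation assertions never ran against a live cluster. For reference, the property under test is the apiserver certificate's lifetime; a manual check on a profile that did boot (a sketch, reusing the certificate path that TestCertOptions reads above) would be:

    # Print only the notAfter date of the apiserver certificate
    out/minikube-darwin-arm64 ssh -p cert-expiration-170000 -- \
      "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"

With --cert-expiration=3m the printed date should land about three minutes after the first start; after the restart with --cert-expiration=8760h it should move roughly a year out, and the second start's output is expected to carry the expired-certificates warning whose absence cert_options_test.go:136 reports above.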

TestDockerFlags (10.2s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-144000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-144000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.969321125s)

-- stdout --
	* [docker-flags-144000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-144000" primary control-plane node in "docker-flags-144000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-144000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 17:58:56.737312    4078 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:58:56.737433    4078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:58:56.737437    4078 out.go:304] Setting ErrFile to fd 2...
	I0803 17:58:56.737439    4078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:58:56.737559    4078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:58:56.738625    4078 out.go:298] Setting JSON to false
	I0803 17:58:56.754618    4078 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3500,"bootTime":1722729636,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:58:56.754684    4078 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:58:56.761445    4078 out.go:177] * [docker-flags-144000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:58:56.769160    4078 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:58:56.769219    4078 notify.go:220] Checking for updates...
	I0803 17:58:56.776318    4078 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:58:56.777741    4078 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:58:56.781272    4078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:58:56.784289    4078 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:58:56.787296    4078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:58:56.790733    4078 config.go:182] Loaded profile config "force-systemd-flag-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:58:56.790803    4078 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:58:56.790854    4078 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:58:56.798337    4078 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 17:58:56.803232    4078 start.go:297] selected driver: qemu2
	I0803 17:58:56.803238    4078 start.go:901] validating driver "qemu2" against <nil>
	I0803 17:58:56.803246    4078 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:58:56.805566    4078 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 17:58:56.808256    4078 out.go:177] * Automatically selected the socket_vmnet network
	I0803 17:58:56.811382    4078 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0803 17:58:56.811426    4078 cni.go:84] Creating CNI manager for ""
	I0803 17:58:56.811434    4078 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 17:58:56.811442    4078 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 17:58:56.811478    4078 start.go:340] cluster config:
	{Name:docker-flags-144000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-144000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:58:56.815344    4078 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:56.823332    4078 out.go:177] * Starting "docker-flags-144000" primary control-plane node in "docker-flags-144000" cluster
	I0803 17:58:56.827280    4078 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:58:56.827297    4078 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:58:56.827312    4078 cache.go:56] Caching tarball of preloaded images
	I0803 17:58:56.827383    4078 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 17:58:56.827389    4078 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:58:56.827465    4078 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/docker-flags-144000/config.json ...
	I0803 17:58:56.827483    4078 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/docker-flags-144000/config.json: {Name:mkbc8e4eacc30c2658aac0f2557edf132ea89e5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:58:56.827818    4078 start.go:360] acquireMachinesLock for docker-flags-144000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:58:56.827857    4078 start.go:364] duration metric: took 31.375µs to acquireMachinesLock for "docker-flags-144000"
	I0803 17:58:56.827869    4078 start.go:93] Provisioning new machine with config: &{Name:docker-flags-144000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-144000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:58:56.827896    4078 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:58:56.834239    4078 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 17:58:56.852407    4078 start.go:159] libmachine.API.Create for "docker-flags-144000" (driver="qemu2")
	I0803 17:58:56.852438    4078 client.go:168] LocalClient.Create starting
	I0803 17:58:56.852500    4078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:58:56.852537    4078 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:56.852548    4078 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:56.852591    4078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:58:56.852618    4078 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:56.852627    4078 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:56.853107    4078 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:58:57.013421    4078 main.go:141] libmachine: Creating SSH key...
	I0803 17:58:57.077373    4078 main.go:141] libmachine: Creating Disk image...
	I0803 17:58:57.077379    4078 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:58:57.077567    4078 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2
	I0803 17:58:57.086967    4078 main.go:141] libmachine: STDOUT: 
	I0803 17:58:57.086993    4078 main.go:141] libmachine: STDERR: 
	I0803 17:58:57.087048    4078 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2 +20000M
	I0803 17:58:57.094935    4078 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:58:57.094948    4078 main.go:141] libmachine: STDERR: 
	I0803 17:58:57.094964    4078 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2
	I0803 17:58:57.094967    4078 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:58:57.094980    4078 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:58:57.095013    4078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:f2:b2:51:35:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2
	I0803 17:58:57.096630    4078 main.go:141] libmachine: STDOUT: 
	I0803 17:58:57.096646    4078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:58:57.096665    4078 client.go:171] duration metric: took 244.230166ms to LocalClient.Create
	I0803 17:58:59.098780    4078 start.go:128] duration metric: took 2.270933125s to createHost
	I0803 17:58:59.098837    4078 start.go:83] releasing machines lock for "docker-flags-144000", held for 2.271040667s
	W0803 17:58:59.098948    4078 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:59.109975    4078 out.go:177] * Deleting "docker-flags-144000" in qemu2 ...
	W0803 17:58:59.149263    4078 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:59.149296    4078 start.go:729] Will try again in 5 seconds ...
	I0803 17:59:04.151422    4078 start.go:360] acquireMachinesLock for docker-flags-144000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:59:04.244213    4078 start.go:364] duration metric: took 92.66975ms to acquireMachinesLock for "docker-flags-144000"
	I0803 17:59:04.244389    4078 start.go:93] Provisioning new machine with config: &{Name:docker-flags-144000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-144000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:59:04.244732    4078 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:59:04.253164    4078 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 17:59:04.302272    4078 start.go:159] libmachine.API.Create for "docker-flags-144000" (driver="qemu2")
	I0803 17:59:04.302316    4078 client.go:168] LocalClient.Create starting
	I0803 17:59:04.302446    4078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:59:04.302513    4078 main.go:141] libmachine: Decoding PEM data...
	I0803 17:59:04.302532    4078 main.go:141] libmachine: Parsing certificate...
	I0803 17:59:04.302592    4078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:59:04.302649    4078 main.go:141] libmachine: Decoding PEM data...
	I0803 17:59:04.302662    4078 main.go:141] libmachine: Parsing certificate...
	I0803 17:59:04.303233    4078 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:59:04.469313    4078 main.go:141] libmachine: Creating SSH key...
	I0803 17:59:04.603793    4078 main.go:141] libmachine: Creating Disk image...
	I0803 17:59:04.603799    4078 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:59:04.604019    4078 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2
	I0803 17:59:04.613611    4078 main.go:141] libmachine: STDOUT: 
	I0803 17:59:04.613629    4078 main.go:141] libmachine: STDERR: 
	I0803 17:59:04.613680    4078 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2 +20000M
	I0803 17:59:04.621499    4078 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:59:04.621512    4078 main.go:141] libmachine: STDERR: 
	I0803 17:59:04.621522    4078 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2
	I0803 17:59:04.621525    4078 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:59:04.621534    4078 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:59:04.621564    4078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:ec:f2:5b:59:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/docker-flags-144000/disk.qcow2
	I0803 17:59:04.623259    4078 main.go:141] libmachine: STDOUT: 
	I0803 17:59:04.623273    4078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:59:04.623287    4078 client.go:171] duration metric: took 320.972708ms to LocalClient.Create
	I0803 17:59:06.625400    4078 start.go:128] duration metric: took 2.380712958s to createHost
	I0803 17:59:06.625449    4078 start.go:83] releasing machines lock for "docker-flags-144000", held for 2.381242666s
	W0803 17:59:06.625777    4078 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-144000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-144000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:59:06.639380    4078 out.go:177] 
	W0803 17:59:06.651706    4078 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:59:06.651751    4078 out.go:239] * 
	* 
	W0803 17:59:06.654076    4078 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:59:06.665387    4078 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-144000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-144000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-144000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (81.896708ms)

-- stdout --
	* The control-plane node docker-flags-144000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-144000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-144000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-144000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-144000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-144000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-144000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-144000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-144000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.599709ms)

-- stdout --
	* The control-plane node docker-flags-144000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-144000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-144000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-144000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-144000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-144000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-03 17:59:06.809331 -0700 PDT m=+2352.049710084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-144000 -n docker-flags-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-144000 -n docker-flags-144000: exit status 7 (28.547208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-144000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-144000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-144000
--- FAIL: TestDockerFlags (10.20s)
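
For context on what was being asserted: docker_test.go:63 greps systemctl's Environment property for the --docker-env pairs, and docker_test.go:73 greps ExecStart for the flag produced by --docker-opt=debug. On a cluster that did boot, healthy output would look roughly like the following (illustrative; the exact dockerd argv varies by ISO build):

    $ out/minikube-darwin-arm64 -p docker-flags-144000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    Environment=FOO=BAR BAZ=BAT

    $ out/minikube-darwin-arm64 -p docker-flags-144000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug ... ; ... }

Because the host was Stopped, both commands instead returned the stopped-host message with exit status 83, which is what the mismatching expectations at docker_test.go:63 and docker_test.go:73 quote.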

TestForceSystemdFlag (10.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-711000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-711000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.986981s)

-- stdout --
	* [force-systemd-flag-711000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-711000" primary control-plane node in "force-systemd-flag-711000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-711000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 17:58:51.632327    4056 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:58:51.632452    4056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:58:51.632455    4056 out.go:304] Setting ErrFile to fd 2...
	I0803 17:58:51.632458    4056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:58:51.632598    4056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:58:51.633678    4056 out.go:298] Setting JSON to false
	I0803 17:58:51.649606    4056 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3495,"bootTime":1722729636,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:58:51.649670    4056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:58:51.655611    4056 out.go:177] * [force-systemd-flag-711000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:58:51.662592    4056 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:58:51.662645    4056 notify.go:220] Checking for updates...
	I0803 17:58:51.670613    4056 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:58:51.674612    4056 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:58:51.677635    4056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:58:51.680665    4056 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:58:51.683619    4056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:58:51.686902    4056 config.go:182] Loaded profile config "force-systemd-env-336000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:58:51.686990    4056 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:58:51.687044    4056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:58:51.691612    4056 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 17:58:51.698613    4056 start.go:297] selected driver: qemu2
	I0803 17:58:51.698621    4056 start.go:901] validating driver "qemu2" against <nil>
	I0803 17:58:51.698630    4056 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:58:51.700940    4056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 17:58:51.704605    4056 out.go:177] * Automatically selected the socket_vmnet network
	I0803 17:58:51.707647    4056 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 17:58:51.707659    4056 cni.go:84] Creating CNI manager for ""
	I0803 17:58:51.707665    4056 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 17:58:51.707669    4056 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 17:58:51.707703    4056 start.go:340] cluster config:
	{Name:force-systemd-flag-711000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:58:51.711317    4056 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:51.718601    4056 out.go:177] * Starting "force-systemd-flag-711000" primary control-plane node in "force-systemd-flag-711000" cluster
	I0803 17:58:51.722571    4056 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:58:51.722589    4056 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:58:51.722605    4056 cache.go:56] Caching tarball of preloaded images
	I0803 17:58:51.722673    4056 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 17:58:51.722680    4056 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:58:51.722745    4056 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/force-systemd-flag-711000/config.json ...
	I0803 17:58:51.722757    4056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/force-systemd-flag-711000/config.json: {Name:mkdc18ce699057ee8dbcc56d7bf8b2e715046437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:58:51.722988    4056 start.go:360] acquireMachinesLock for force-systemd-flag-711000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:58:51.723026    4056 start.go:364] duration metric: took 30.541µs to acquireMachinesLock for "force-systemd-flag-711000"
	I0803 17:58:51.723038    4056 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:58:51.723065    4056 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:58:51.730596    4056 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 17:58:51.749055    4056 start.go:159] libmachine.API.Create for "force-systemd-flag-711000" (driver="qemu2")
	I0803 17:58:51.749084    4056 client.go:168] LocalClient.Create starting
	I0803 17:58:51.749152    4056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:58:51.749188    4056 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:51.749204    4056 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:51.749243    4056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:58:51.749267    4056 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:51.749277    4056 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:51.749653    4056 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:58:51.905622    4056 main.go:141] libmachine: Creating SSH key...
	I0803 17:58:52.065286    4056 main.go:141] libmachine: Creating Disk image...
	I0803 17:58:52.065294    4056 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:58:52.065509    4056 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2
	I0803 17:58:52.074964    4056 main.go:141] libmachine: STDOUT: 
	I0803 17:58:52.074988    4056 main.go:141] libmachine: STDERR: 
	I0803 17:58:52.075050    4056 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2 +20000M
	I0803 17:58:52.082888    4056 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:58:52.082903    4056 main.go:141] libmachine: STDERR: 
	I0803 17:58:52.082918    4056 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2
	I0803 17:58:52.082926    4056 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:58:52.082942    4056 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:58:52.082971    4056 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:0b:50:df:e4:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2
	I0803 17:58:52.084592    4056 main.go:141] libmachine: STDOUT: 
	I0803 17:58:52.084613    4056 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:58:52.084632    4056 client.go:171] duration metric: took 335.553459ms to LocalClient.Create
	I0803 17:58:54.086808    4056 start.go:128] duration metric: took 2.363783875s to createHost
	I0803 17:58:54.086890    4056 start.go:83] releasing machines lock for "force-systemd-flag-711000", held for 2.363928s
	W0803 17:58:54.087027    4056 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:54.108215    4056 out.go:177] * Deleting "force-systemd-flag-711000" in qemu2 ...
	W0803 17:58:54.130895    4056 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:54.130920    4056 start.go:729] Will try again in 5 seconds ...
	I0803 17:58:59.132925    4056 start.go:360] acquireMachinesLock for force-systemd-flag-711000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:58:59.133276    4056 start.go:364] duration metric: took 230.25µs to acquireMachinesLock for "force-systemd-flag-711000"
	I0803 17:58:59.133346    4056 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:58:59.133514    4056 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:58:59.142926    4056 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 17:58:59.181850    4056 start.go:159] libmachine.API.Create for "force-systemd-flag-711000" (driver="qemu2")
	I0803 17:58:59.181896    4056 client.go:168] LocalClient.Create starting
	I0803 17:58:59.182049    4056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:58:59.182106    4056 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:59.182119    4056 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:59.182179    4056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:58:59.182217    4056 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:59.182230    4056 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:59.184327    4056 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:58:59.348070    4056 main.go:141] libmachine: Creating SSH key...
	I0803 17:58:59.522403    4056 main.go:141] libmachine: Creating Disk image...
	I0803 17:58:59.522410    4056 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:58:59.522624    4056 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2
	I0803 17:58:59.532224    4056 main.go:141] libmachine: STDOUT: 
	I0803 17:58:59.532243    4056 main.go:141] libmachine: STDERR: 
	I0803 17:58:59.532289    4056 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2 +20000M
	I0803 17:58:59.540191    4056 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:58:59.540205    4056 main.go:141] libmachine: STDERR: 
	I0803 17:58:59.540215    4056 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2
	I0803 17:58:59.540221    4056 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:58:59.540233    4056 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:58:59.540268    4056 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e2:8e:09:08:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-flag-711000/disk.qcow2
	I0803 17:58:59.541898    4056 main.go:141] libmachine: STDOUT: 
	I0803 17:58:59.541922    4056 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:58:59.541933    4056 client.go:171] duration metric: took 360.043ms to LocalClient.Create
	I0803 17:59:01.544084    4056 start.go:128] duration metric: took 2.410619667s to createHost
	I0803 17:59:01.544133    4056 start.go:83] releasing machines lock for "force-systemd-flag-711000", held for 2.410913125s
	W0803 17:59:01.544469    4056 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:59:01.557925    4056 out.go:177] 
	W0803 17:59:01.569639    4056 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:59:01.569680    4056 out.go:239] * 
	* 
	W0803 17:59:01.572102    4056 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:59:01.577159    4056 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-711000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-711000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-711000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.397541ms)

-- stdout --
	* The control-plane node force-systemd-flag-711000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-711000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-711000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-03 17:59:01.677372 -0700 PDT m=+2346.917590168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-711000 -n force-systemd-flag-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-711000 -n force-systemd-flag-711000: exit status 7 (34.111667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-711000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-711000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-711000
--- FAIL: TestForceSystemdFlag (10.18s)
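
Both createHost attempts above die at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so qemu-system-aarch64 is never launched, and the retry five seconds later hits the identical error. A minimal sketch for confirming and clearing that condition on the build host follows; it assumes the Homebrew-managed socket_vmnet install implied by the /opt/socket_vmnet paths in this log, and the restart line is a generic brew-services invocation, not a command the test suite itself runs:

	# Is anything serving the socket minikube expects?
	ls -l /var/run/socket_vmnet    # missing socket file => daemon is down
	pgrep -fl socket_vmnet         # no output => daemon is down

	# Restart the daemon (vmnet requires root), then retry the failing start:
	sudo brew services restart socket_vmnet
	out/minikube-darwin-arm64 start -p force-systemd-flag-711000 --memory=2048 --force-systemd --driver=qemu2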

TestForceSystemdEnv (10.65s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-336000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-336000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.459119084s)

-- stdout --
	* [force-systemd-env-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-336000" primary control-plane node in "force-systemd-env-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 17:58:46.084515    4024 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:58:46.084650    4024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:58:46.084653    4024 out.go:304] Setting ErrFile to fd 2...
	I0803 17:58:46.084655    4024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:58:46.084797    4024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:58:46.085887    4024 out.go:298] Setting JSON to false
	I0803 17:58:46.102284    4024 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3490,"bootTime":1722729636,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:58:46.102362    4024 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:58:46.108478    4024 out.go:177] * [force-systemd-env-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:58:46.116432    4024 notify.go:220] Checking for updates...
	I0803 17:58:46.122394    4024 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:58:46.130474    4024 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:58:46.137383    4024 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:58:46.145384    4024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:58:46.153276    4024 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:58:46.161375    4024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0803 17:58:46.164758    4024 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:58:46.164798    4024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:58:46.168359    4024 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 17:58:46.175429    4024 start.go:297] selected driver: qemu2
	I0803 17:58:46.175437    4024 start.go:901] validating driver "qemu2" against <nil>
	I0803 17:58:46.175446    4024 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:58:46.177913    4024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 17:58:46.182404    4024 out.go:177] * Automatically selected the socket_vmnet network
	I0803 17:58:46.185552    4024 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 17:58:46.185599    4024 cni.go:84] Creating CNI manager for ""
	I0803 17:58:46.185608    4024 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 17:58:46.185612    4024 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 17:58:46.185657    4024 start.go:340] cluster config:
	{Name:force-systemd-env-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:58:46.189417    4024 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:46.193423    4024 out.go:177] * Starting "force-systemd-env-336000" primary control-plane node in "force-systemd-env-336000" cluster
	I0803 17:58:46.197431    4024 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:58:46.197447    4024 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:58:46.197463    4024 cache.go:56] Caching tarball of preloaded images
	I0803 17:58:46.197560    4024 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 17:58:46.197573    4024 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:58:46.197635    4024 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/force-systemd-env-336000/config.json ...
	I0803 17:58:46.197647    4024 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/force-systemd-env-336000/config.json: {Name:mk91eb0be8ff19c6724b639401d4dbf334e06801 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:58:46.197911    4024 start.go:360] acquireMachinesLock for force-systemd-env-336000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:58:46.197952    4024 start.go:364] duration metric: took 34.458µs to acquireMachinesLock for "force-systemd-env-336000"
	I0803 17:58:46.197965    4024 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:58:46.198000    4024 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:58:46.205416    4024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 17:58:46.223462    4024 start.go:159] libmachine.API.Create for "force-systemd-env-336000" (driver="qemu2")
	I0803 17:58:46.223490    4024 client.go:168] LocalClient.Create starting
	I0803 17:58:46.223558    4024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:58:46.223588    4024 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:46.223598    4024 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:46.223632    4024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:58:46.223657    4024 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:46.223670    4024 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:46.224030    4024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:58:46.381485    4024 main.go:141] libmachine: Creating SSH key...
	I0803 17:58:46.431652    4024 main.go:141] libmachine: Creating Disk image...
	I0803 17:58:46.431657    4024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:58:46.431835    4024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2
	I0803 17:58:46.441244    4024 main.go:141] libmachine: STDOUT: 
	I0803 17:58:46.441264    4024 main.go:141] libmachine: STDERR: 
	I0803 17:58:46.441318    4024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2 +20000M
	I0803 17:58:46.449541    4024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:58:46.449556    4024 main.go:141] libmachine: STDERR: 
	I0803 17:58:46.449579    4024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2
	I0803 17:58:46.449584    4024 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:58:46.449598    4024 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:58:46.449622    4024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:25:a0:0f:42:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2
	I0803 17:58:46.451258    4024 main.go:141] libmachine: STDOUT: 
	I0803 17:58:46.451275    4024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:58:46.451293    4024 client.go:171] duration metric: took 227.804959ms to LocalClient.Create
	I0803 17:58:48.453329    4024 start.go:128] duration metric: took 2.255382917s to createHost
	I0803 17:58:48.453348    4024 start.go:83] releasing machines lock for "force-systemd-env-336000", held for 2.25546125s
	W0803 17:58:48.453362    4024 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:48.463814    4024 out.go:177] * Deleting "force-systemd-env-336000" in qemu2 ...
	W0803 17:58:48.475036    4024 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:48.475043    4024 start.go:729] Will try again in 5 seconds ...
	I0803 17:58:53.477197    4024 start.go:360] acquireMachinesLock for force-systemd-env-336000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:58:54.087148    4024 start.go:364] duration metric: took 609.831542ms to acquireMachinesLock for "force-systemd-env-336000"
	I0803 17:58:54.087304    4024 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:58:54.087584    4024 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:58:54.100089    4024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 17:58:54.149115    4024 start.go:159] libmachine.API.Create for "force-systemd-env-336000" (driver="qemu2")
	I0803 17:58:54.149164    4024 client.go:168] LocalClient.Create starting
	I0803 17:58:54.149300    4024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:58:54.149371    4024 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:54.149393    4024 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:54.149460    4024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:58:54.149510    4024 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:54.149524    4024 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:54.150121    4024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:58:54.329862    4024 main.go:141] libmachine: Creating SSH key...
	I0803 17:58:54.441960    4024 main.go:141] libmachine: Creating Disk image...
	I0803 17:58:54.441965    4024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:58:54.442175    4024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2
	I0803 17:58:54.451555    4024 main.go:141] libmachine: STDOUT: 
	I0803 17:58:54.451573    4024 main.go:141] libmachine: STDERR: 
	I0803 17:58:54.451630    4024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2 +20000M
	I0803 17:58:54.459493    4024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:58:54.459507    4024 main.go:141] libmachine: STDERR: 
	I0803 17:58:54.459520    4024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2
	I0803 17:58:54.459524    4024 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:58:54.459536    4024 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:58:54.459558    4024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:42:88:9a:b3:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/force-systemd-env-336000/disk.qcow2
	I0803 17:58:54.461229    4024 main.go:141] libmachine: STDOUT: 
	I0803 17:58:54.461248    4024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:58:54.461261    4024 client.go:171] duration metric: took 312.102375ms to LocalClient.Create
	I0803 17:58:56.463374    4024 start.go:128] duration metric: took 2.375804792s to createHost
	I0803 17:58:56.463421    4024 start.go:83] releasing machines lock for "force-systemd-env-336000", held for 2.376305s
	W0803 17:58:56.463771    4024 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:56.481477    4024 out.go:177] 
	W0803 17:58:56.489319    4024 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:58:56.489340    4024 out.go:239] * 
	* 
	W0803 17:58:56.491093    4024 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:58:56.500271    4024 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-336000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-336000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-336000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.636042ms)

-- stdout --
	* The control-plane node force-systemd-env-336000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-336000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-336000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-03 17:58:56.597501 -0700 PDT m=+2341.837560043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-336000 -n force-systemd-env-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-336000 -n force-systemd-env-336000: exit status 7 (34.567875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-336000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-336000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-336000
--- FAIL: TestForceSystemdEnv (10.65s)
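
For reference, the assertion these two force-systemd tests never reach is the cgroup-driver probe from docker_test.go:110. Against a node that actually boots it reduces to the single command below (a sketch: "systemd" is the value the test expects when --force-systemd or MINIKUBE_FORCE_SYSTEMD=true is in effect; without the flag Docker typically reports cgroupfs here):

	out/minikube-darwin-arm64 -p force-systemd-env-336000 ssh "docker info --format {{.CgroupDriver}}"
	# expected output on a passing run: systemd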

TestFunctional/parallel/ServiceCmdConnect (29.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-959000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-959000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-kpf5r" [7409f7b9-271c-41e7-acc0-ea2ca3c3b5cb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-kpf5r" [7409f7b9-271c-41e7-acc0-ea2ca3c3b5cb] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003519458s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:32228
functional_test.go:1657: error fetching http://192.168.105.4:32228: Get "http://192.168.105.4:32228": dial tcp 192.168.105.4:32228: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32228: Get "http://192.168.105.4:32228": dial tcp 192.168.105.4:32228: connect: connection refused
E0803 17:31:49.678469    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
functional_test.go:1657: error fetching http://192.168.105.4:32228: Get "http://192.168.105.4:32228": dial tcp 192.168.105.4:32228: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32228: Get "http://192.168.105.4:32228": dial tcp 192.168.105.4:32228: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32228: Get "http://192.168.105.4:32228": dial tcp 192.168.105.4:32228: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32228: Get "http://192.168.105.4:32228": dial tcp 192.168.105.4:32228: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32228: Get "http://192.168.105.4:32228": dial tcp 192.168.105.4:32228: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:32228: Get "http://192.168.105.4:32228": dial tcp 192.168.105.4:32228: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-959000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-kpf5r
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-959000/192.168.105.4
Start Time:       Sat, 03 Aug 2024 17:31:39 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://6e12ab0f9d1687f541a7ba55d5846acedfaaa4e9caafb39580fc8e10df9272a7
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 03 Aug 2024 17:31:56 -0700
      Finished:     Sat, 03 Aug 2024 17:31:57 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 03 Aug 2024 17:31:41 -0700
      Finished:     Sat, 03 Aug 2024 17:31:41 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7blcd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-7blcd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  28s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-kpf5r to functional-959000
  Normal   Pulled     12s (x3 over 28s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    12s (x3 over 28s)  kubelet            Created container echoserver-arm
  Normal   Started    11s (x3 over 28s)  kubelet            Started container echoserver-arm
  Warning  BackOff    11s (x3 over 26s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-kpf5r_default(7409f7b9-271c-41e7-acc0-ea2ca3c3b5cb)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-959000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
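The "exec format error" above is the real failure behind this test: the container entrypoint is a binary built for a different CPU architecture than the arm64 node, so the pod crash-loops (Restart Count: 2, BackOff events) and never serves traffic. A sketch for confirming the mismatch from the host with standard docker commands, using the image tag taken from the describe output above:

	# Which architectures does the image manifest actually provide?
	docker manifest inspect registry.k8s.io/echoserver-arm:1.8 | grep -i architecture

	# Or pull it and inspect the local copy:
	docker pull registry.k8s.io/echoserver-arm:1.8
	docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8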
functional_test.go:1610: (dbg) Run:  kubectl --context functional-959000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.86.142
IPs:                      10.102.86.142
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32228/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
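Note the empty Endpoints: field in the Service description: with the only pod matching app=hello-node-connect stuck restarting and never Ready, the Service has no endpoints to forward to, which is exactly why each fetch of http://192.168.105.4:32228 ended in "connection refused". Two quick checks (a sketch; the context and resource names are the ones this test created):

	kubectl --context functional-959000 get endpoints hello-node-connect      # ENDPOINTS column is empty
	kubectl --context functional-959000 get pods -l app=hello-node-connect    # shows the crash-looping pod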
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-959000 -n functional-959000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service | functional-959000 service list                                                                                       | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:31 PDT | 03 Aug 24 17:31 PDT |
	| service | functional-959000 service list                                                                                       | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:31 PDT | 03 Aug 24 17:31 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-959000 service                                                                                            | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:31 PDT | 03 Aug 24 17:31 PDT |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-959000                                                                                                    | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:31 PDT | 03 Aug 24 17:31 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-959000 service                                                                                            | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:31 PDT | 03 Aug 24 17:31 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| addons  | functional-959000 addons list                                                                                        | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:31 PDT | 03 Aug 24 17:31 PDT |
	| addons  | functional-959000 addons list                                                                                        | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:31 PDT | 03 Aug 24 17:31 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-959000 service                                                                                            | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:31 PDT | 03 Aug 24 17:31 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh findmnt                                                                                        | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-959000                                                                                                 | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4082139386/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh findmnt                                                                                        | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT | 03 Aug 24 17:32 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh -- ls                                                                                          | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT | 03 Aug 24 17:32 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh cat                                                                                            | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT | 03 Aug 24 17:32 PDT |
	|         | /mount-9p/test-1722731522148578000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh stat                                                                                           | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT | 03 Aug 24 17:32 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh stat                                                                                           | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT | 03 Aug 24 17:32 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh sudo                                                                                           | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT | 03 Aug 24 17:32 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-959000                                                                                                 | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1080122501/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh findmnt                                                                                        | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh findmnt                                                                                        | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT | 03 Aug 24 17:32 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh -- ls                                                                                          | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT | 03 Aug 24 17:32 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh sudo                                                                                           | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-959000                                                                                                 | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3207932887/001:/mount1   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-959000                                                                                                 | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3207932887/001:/mount2   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-959000                                                                                                 | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3207932887/001:/mount3   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-959000 ssh findmnt                                                                                        | functional-959000 | jenkins | v1.33.1 | 03 Aug 24 17:32 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
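	
	Each TestFunctional/parallel/MountCmd* case above follows the same cycle: start a background 9p mount, then probe it from inside the guest with findmnt. A minimal sketch of that cycle (HOST_DIR is an illustrative placeholder for the temp directory the test creates):
	
	    minikube mount -p functional-959000 "$HOST_DIR:/mount-9p" --alsologtostderr -v=1 &
	    MOUNT_PID=$!
	    minikube -p functional-959000 ssh -- "findmnt -T /mount-9p | grep 9p"   # succeeds once the 9p mount is live
	    kill $MOUNT_PID                                                          # tear the mount down when finished
	
	The empty final column on the mount rows is expected: minikube mount blocks for the lifetime of the mount, so only the ssh probes run against it record a completion time.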
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 17:30:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 17:30:17.511002    2470 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:30:17.511120    2470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:30:17.511122    2470 out.go:304] Setting ErrFile to fd 2...
	I0803 17:30:17.511124    2470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:30:17.511257    2470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:30:17.512343    2470 out.go:298] Setting JSON to false
	I0803 17:30:17.528454    2470 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1781,"bootTime":1722729636,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:30:17.528518    2470 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:30:17.532909    2470 out.go:177] * [functional-959000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:30:17.540035    2470 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:30:17.540098    2470 notify.go:220] Checking for updates...
	I0803 17:30:17.545998    2470 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:30:17.549031    2470 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:30:17.551996    2470 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:30:17.554978    2470 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:30:17.558030    2470 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:30:17.561260    2470 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:30:17.561309    2470 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:30:17.565978    2470 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 17:30:17.572916    2470 start.go:297] selected driver: qemu2
	I0803 17:30:17.572920    2470 start.go:901] validating driver "qemu2" against &{Name:functional-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:30:17.572993    2470 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:30:17.575271    2470 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 17:30:17.575308    2470 cni.go:84] Creating CNI manager for ""
	I0803 17:30:17.575315    2470 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 17:30:17.575348    2470 start.go:340] cluster config:
	{Name:functional-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:30:17.578616    2470 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:30:17.586000    2470 out.go:177] * Starting "functional-959000" primary control-plane node in "functional-959000" cluster
	I0803 17:30:17.589937    2470 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:30:17.589952    2470 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:30:17.589958    2470 cache.go:56] Caching tarball of preloaded images
	I0803 17:30:17.590029    2470 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 17:30:17.590044    2470 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:30:17.590090    2470 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/config.json ...
	I0803 17:30:17.590425    2470 start.go:360] acquireMachinesLock for functional-959000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:30:17.590455    2470 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "functional-959000"
	I0803 17:30:17.590461    2470 start.go:96] Skipping create...Using existing machine configuration
	I0803 17:30:17.590466    2470 fix.go:54] fixHost starting: 
	I0803 17:30:17.591033    2470 fix.go:112] recreateIfNeeded on functional-959000: state=Running err=<nil>
	W0803 17:30:17.591039    2470 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 17:30:17.597996    2470 out.go:177] * Updating the running qemu2 "functional-959000" VM ...
	I0803 17:30:17.601965    2470 machine.go:94] provisionDockerMachine start ...
	I0803 17:30:17.602000    2470 main.go:141] libmachine: Using SSH client type: native
	I0803 17:30:17.602106    2470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10308aa10] 0x10308d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0803 17:30:17.602108    2470 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 17:30:17.645959    2470 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-959000
	
	I0803 17:30:17.645968    2470 buildroot.go:166] provisioning hostname "functional-959000"
	I0803 17:30:17.646002    2470 main.go:141] libmachine: Using SSH client type: native
	I0803 17:30:17.646109    2470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10308aa10] 0x10308d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0803 17:30:17.646112    2470 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-959000 && echo "functional-959000" | sudo tee /etc/hostname
	I0803 17:30:17.691942    2470 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-959000
	
	I0803 17:30:17.691994    2470 main.go:141] libmachine: Using SSH client type: native
	I0803 17:30:17.692100    2470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10308aa10] 0x10308d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0803 17:30:17.692106    2470 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-959000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-959000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-959000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 17:30:17.733219    2470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 17:30:17.733226    2470 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19364-1166/.minikube CaCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19364-1166/.minikube}
	I0803 17:30:17.733232    2470 buildroot.go:174] setting up certificates
	I0803 17:30:17.733236    2470 provision.go:84] configureAuth start
	I0803 17:30:17.733238    2470 provision.go:143] copyHostCerts
	I0803 17:30:17.733297    2470 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem, removing ...
	I0803 17:30:17.733301    2470 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem
	I0803 17:30:17.733436    2470 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem (1082 bytes)
	I0803 17:30:17.733635    2470 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem, removing ...
	I0803 17:30:17.733637    2470 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem
	I0803 17:30:17.733800    2470 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem (1123 bytes)
	I0803 17:30:17.733946    2470 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem, removing ...
	I0803 17:30:17.733948    2470 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem
	I0803 17:30:17.734005    2470 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem (1675 bytes)
	I0803 17:30:17.734103    2470 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem org=jenkins.functional-959000 san=[127.0.0.1 192.168.105.4 functional-959000 localhost minikube]
	I0803 17:30:17.789470    2470 provision.go:177] copyRemoteCerts
	I0803 17:30:17.789509    2470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 17:30:17.789515    2470 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/functional-959000/id_rsa Username:docker}
	I0803 17:30:17.812994    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 17:30:17.821769    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0803 17:30:17.830055    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 17:30:17.838569    2470 provision.go:87] duration metric: took 105.32725ms to configureAuth
	I0803 17:30:17.838575    2470 buildroot.go:189] setting minikube options for container-runtime
	I0803 17:30:17.838674    2470 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:30:17.838706    2470 main.go:141] libmachine: Using SSH client type: native
	I0803 17:30:17.838786    2470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10308aa10] 0x10308d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0803 17:30:17.838789    2470 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0803 17:30:17.880815    2470 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0803 17:30:17.880823    2470 buildroot.go:70] root file system type: tmpfs
	I0803 17:30:17.880876    2470 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0803 17:30:17.880930    2470 main.go:141] libmachine: Using SSH client type: native
	I0803 17:30:17.881041    2470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10308aa10] 0x10308d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0803 17:30:17.881072    2470 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0803 17:30:17.927379    2470 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0803 17:30:17.927426    2470 main.go:141] libmachine: Using SSH client type: native
	I0803 17:30:17.927536    2470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10308aa10] 0x10308d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0803 17:30:17.927543    2470 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0803 17:30:17.970635    2470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 17:30:17.970641    2470 machine.go:97] duration metric: took 368.681917ms to provisionDockerMachine
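	
	The docker.service update above is an idempotent write-if-changed: the regenerated unit is written to a .new path, and only when it differs from the installed unit is it moved into place, followed by a daemon-reload and restart. The same pattern in isolation (paths exactly as in the log):
	
	    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	           sudo systemctl daemon-reload && sudo systemctl restart docker; }
	
	When the files already match, diff exits 0 and the right-hand side never runs, so an unchanged configuration costs no docker restart; that is why this pass completed in roughly 369ms.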
	I0803 17:30:17.970650    2470 start.go:293] postStartSetup for "functional-959000" (driver="qemu2")
	I0803 17:30:17.970655    2470 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 17:30:17.970703    2470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 17:30:17.970709    2470 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/functional-959000/id_rsa Username:docker}
	I0803 17:30:17.995726    2470 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 17:30:17.997254    2470 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 17:30:17.997259    2470 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1166/.minikube/addons for local assets ...
	I0803 17:30:17.997349    2470 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1166/.minikube/files for local assets ...
	I0803 17:30:17.997473    2470 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem -> 16732.pem in /etc/ssl/certs
	I0803 17:30:17.997589    2470 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/test/nested/copy/1673/hosts -> hosts in /etc/test/nested/copy/1673
	I0803 17:30:17.997640    2470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1673
	I0803 17:30:18.000993    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem --> /etc/ssl/certs/16732.pem (1708 bytes)
	I0803 17:30:18.009024    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/test/nested/copy/1673/hosts --> /etc/test/nested/copy/1673/hosts (40 bytes)
	I0803 17:30:18.017325    2470 start.go:296] duration metric: took 46.670584ms for postStartSetup
	I0803 17:30:18.017344    2470 fix.go:56] duration metric: took 426.888542ms for fixHost
	I0803 17:30:18.017377    2470 main.go:141] libmachine: Using SSH client type: native
	I0803 17:30:18.017478    2470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10308aa10] 0x10308d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0803 17:30:18.017481    2470 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 17:30:18.059028    2470 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731418.089818153
	
	I0803 17:30:18.059032    2470 fix.go:216] guest clock: 1722731418.089818153
	I0803 17:30:18.059036    2470 fix.go:229] Guest: 2024-08-03 17:30:18.089818153 -0700 PDT Remote: 2024-08-03 17:30:18.017345 -0700 PDT m=+0.526655376 (delta=72.473153ms)
	I0803 17:30:18.059044    2470 fix.go:200] guest clock delta is within tolerance: 72.473153ms
	I0803 17:30:18.059046    2470 start.go:83] releasing machines lock for "functional-959000", held for 468.599625ms
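	
	The guest-clock check reads date +%s.%N inside the VM and compares it to the host wall clock; here the 72.473153ms delta is within tolerance, so no resync happens. A coarse, seconds-granularity version of the same comparison from the host (profile name as in this run):
	
	    HOST=$(date +%s)
	    GUEST=$(minikube -p functional-959000 ssh -- date +%s)
	    echo "guest-host delta: $((GUEST - HOST))s"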
	I0803 17:30:18.059356    2470 ssh_runner.go:195] Run: cat /version.json
	I0803 17:30:18.059362    2470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 17:30:18.059361    2470 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/functional-959000/id_rsa Username:docker}
	I0803 17:30:18.059376    2470 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/functional-959000/id_rsa Username:docker}
	I0803 17:30:18.082494    2470 ssh_runner.go:195] Run: systemctl --version
	I0803 17:30:18.085022    2470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 17:30:18.129755    2470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 17:30:18.129787    2470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 17:30:18.133053    2470 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0803 17:30:18.133059    2470 start.go:495] detecting cgroup driver to use...
	I0803 17:30:18.133129    2470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 17:30:18.140432    2470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0803 17:30:18.144676    2470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0803 17:30:18.148742    2470 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0803 17:30:18.148769    2470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0803 17:30:18.152765    2470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 17:30:18.156479    2470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0803 17:30:18.160312    2470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 17:30:18.164412    2470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 17:30:18.168609    2470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0803 17:30:18.172174    2470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0803 17:30:18.176209    2470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0803 17:30:18.180588    2470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 17:30:18.184015    2470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 17:30:18.187656    2470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 17:30:18.284737    2470 ssh_runner.go:195] Run: sudo systemctl restart containerd
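	
	The sed pass above pins containerd to the cgroupfs driver (SystemdCgroup = false); the sysctl call verifies that bridged traffic is seen by iptables, and the echo into /proc/sys/net/ipv4/ip_forward enables IPv4 forwarding, both prerequisites for pod networking regardless of which runtime ends up selected. Quick checks inside the guest:
	
	    grep SystemdCgroup /etc/containerd/config.toml   # expect: SystemdCgroup = false
	    sysctl net.ipv4.ip_forward                       # expect: net.ipv4.ip_forward = 1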
	I0803 17:30:18.296187    2470 start.go:495] detecting cgroup driver to use...
	I0803 17:30:18.296245    2470 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0803 17:30:18.302476    2470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 17:30:18.308155    2470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 17:30:18.315783    2470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 17:30:18.321664    2470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 17:30:18.327026    2470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 17:30:18.333418    2470 ssh_runner.go:195] Run: which cri-dockerd
	I0803 17:30:18.335150    2470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0803 17:30:18.338370    2470 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0803 17:30:18.344632    2470 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0803 17:30:18.435133    2470 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0803 17:30:18.525759    2470 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0803 17:30:18.525805    2470 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0803 17:30:18.532445    2470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 17:30:18.625488    2470 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 17:30:30.914637    2470 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.28941925s)
	I0803 17:30:30.914715    2470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0803 17:30:30.922276    2470 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0803 17:30:30.930889    2470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 17:30:30.936996    2470 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0803 17:30:31.013965    2470 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0803 17:30:31.107411    2470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 17:30:31.182149    2470 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0803 17:30:31.189064    2470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 17:30:31.197522    2470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 17:30:31.273518    2470 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0803 17:30:31.304162    2470 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0803 17:30:31.304234    2470 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
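	
	The 60s socket wait reduces to polling until /var/run/cri-dockerd.sock exists; here the stat succeeds immediately. An equivalent guest-side poll (a sketch, not minikube's actual Go retry loop):
	
	    timeout 60 sh -c 'until stat /var/run/cri-dockerd.sock >/dev/null 2>&1; do sleep 1; done'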
	I0803 17:30:31.306503    2470 start.go:563] Will wait 60s for crictl version
	I0803 17:30:31.306542    2470 ssh_runner.go:195] Run: which crictl
	I0803 17:30:31.308144    2470 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 17:30:31.320531    2470 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0803 17:30:31.320601    2470 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 17:30:31.334019    2470 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 17:30:31.344585    2470 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0803 17:30:31.344740    2470 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0803 17:30:31.352375    2470 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0803 17:30:31.356404    2470 kubeadm.go:883] updating cluster {Name:functional-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 17:30:31.356444    2470 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:30:31.356489    2470 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 17:30:31.361985    2470 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-959000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0803 17:30:31.361990    2470 docker.go:615] Images already preloaded, skipping extraction
	I0803 17:30:31.362040    2470 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 17:30:31.367479    2470 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-959000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0803 17:30:31.367484    2470 cache_images.go:84] Images are preloaded, skipping loading
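	
	The two identical docker images listings are the preload short-circuit: because every image required for v1.30.3 is already in the docker store, the cached preloaded-images tarball is never extracted. The same check by hand from the host:
	
	    minikube -p functional-959000 ssh -- "docker images --format '{{.Repository}}:{{.Tag}}'" | grep registry.k8s.io/kube-
	    # expect the four v1.30.3 control-plane images listed in the stdout block above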
	I0803 17:30:31.367488    2470 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.30.3 docker true true} ...
	I0803 17:30:31.367548    2470 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-959000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 17:30:31.367595    2470 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0803 17:30:31.383371    2470 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0803 17:30:31.383419    2470 cni.go:84] Creating CNI manager for ""
	I0803 17:30:31.383426    2470 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 17:30:31.383430    2470 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 17:30:31.383439    2470 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-959000 NodeName:functional-959000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 17:30:31.383491    2470 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-959000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
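	
	On a fresh node this rendered file would drive a standard config-driven bootstrap, roughly (a sketch; minikube invokes the versioned kubeadm binary with additional flags):
	
	    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	
	On this restart it is instead written to kubeadm.yaml.new and diffed against the previous copy, which is what triggers the reconfigure further below.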
	
	I0803 17:30:31.383553    2470 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 17:30:31.387643    2470 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 17:30:31.387670    2470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 17:30:31.391413    2470 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0803 17:30:31.397213    2470 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 17:30:31.403289    2470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I0803 17:30:31.409301    2470 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0803 17:30:31.410743    2470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 17:30:31.484646    2470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 17:30:31.490536    2470 certs.go:68] Setting up /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000 for IP: 192.168.105.4
	I0803 17:30:31.490539    2470 certs.go:194] generating shared ca certs ...
	I0803 17:30:31.490546    2470 certs.go:226] acquiring lock for ca certs: {Name:mk4c6ee72dd2b768bec67e582e0b6b1af1b504e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:30:31.490703    2470 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.key
	I0803 17:30:31.490764    2470 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.key
	I0803 17:30:31.490767    2470 certs.go:256] generating profile certs ...
	I0803 17:30:31.490841    2470 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.key
	I0803 17:30:31.490896    2470 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/apiserver.key.0e6e7a5c
	I0803 17:30:31.490946    2470 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/proxy-client.key
	I0803 17:30:31.491110    2470 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673.pem (1338 bytes)
	W0803 17:30:31.491140    2470 certs.go:480] ignoring /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673_empty.pem, impossibly tiny 0 bytes
	I0803 17:30:31.491149    2470 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 17:30:31.491166    2470 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem (1082 bytes)
	I0803 17:30:31.491187    2470 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem (1123 bytes)
	I0803 17:30:31.491202    2470 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem (1675 bytes)
	I0803 17:30:31.491245    2470 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem (1708 bytes)
	I0803 17:30:31.491606    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 17:30:31.500622    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0803 17:30:31.509033    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 17:30:31.517134    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 17:30:31.525349    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0803 17:30:31.533494    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 17:30:31.541479    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 17:30:31.549769    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0803 17:30:31.558013    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem --> /usr/share/ca-certificates/16732.pem (1708 bytes)
	I0803 17:30:31.566025    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 17:30:31.574109    2470 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673.pem --> /usr/share/ca-certificates/1673.pem (1338 bytes)
	I0803 17:30:31.582518    2470 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 17:30:31.588385    2470 ssh_runner.go:195] Run: openssl version
	I0803 17:30:31.590613    2470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 17:30:31.594276    2470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 17:30:31.595881    2470 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:21 /usr/share/ca-certificates/minikubeCA.pem
	I0803 17:30:31.595903    2470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 17:30:31.598278    2470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 17:30:31.601589    2470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1673.pem && ln -fs /usr/share/ca-certificates/1673.pem /etc/ssl/certs/1673.pem"
	I0803 17:30:31.605372    2470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1673.pem
	I0803 17:30:31.606934    2470 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 00:28 /usr/share/ca-certificates/1673.pem
	I0803 17:30:31.606949    2470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1673.pem
	I0803 17:30:31.609022    2470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1673.pem /etc/ssl/certs/51391683.0"
	I0803 17:30:31.612610    2470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16732.pem && ln -fs /usr/share/ca-certificates/16732.pem /etc/ssl/certs/16732.pem"
	I0803 17:30:31.616654    2470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16732.pem
	I0803 17:30:31.618288    2470 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 00:28 /usr/share/ca-certificates/16732.pem
	I0803 17:30:31.618305    2470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16732.pem
	I0803 17:30:31.620365    2470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16732.pem /etc/ssl/certs/3ec20f2e.0"
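	
	The three ls/hash/ln sequences above implement the OpenSSL CA-directory convention: openssl x509 -hash -noout prints the certificate's subject-name hash, and the certificate must be reachable as /etc/ssl/certs/<hash>.0 for lookups to find it. For the minikube CA, this run effectively did:
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	
	The .0 suffix disambiguates certificates whose subject names happen to hash to the same value.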
	I0803 17:30:31.624167    2470 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 17:30:31.625958    2470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 17:30:31.628032    2470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 17:30:31.630124    2470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 17:30:31.632088    2470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 17:30:31.634240    2470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 17:30:31.636315    2470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
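	
	Each -checkend 86400 probe asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, 1 means it would expire inside the window and needs regeneration. Standalone form for any of the certs checked above:
	
	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"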
	I0803 17:30:31.638444    2470 kubeadm.go:392] StartCluster: {Name:functional-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:30:31.638507    2470 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 17:30:31.644547    2470 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 17:30:31.648517    2470 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0803 17:30:31.648520    2470 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0803 17:30:31.648543    2470 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0803 17:30:31.652198    2470 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0803 17:30:31.652530    2470 kubeconfig.go:125] found "functional-959000" server: "https://192.168.105.4:8441"
	I0803 17:30:31.653124    2470 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0803 17:30:31.656741    2470 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
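	As the Run line above shows, minikube detects drift by shelling out to `sudo diff -u` on the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new; a non-empty diff (here, the changed enable-admission-plugins value) triggers a cluster reconfigure. A minimal pure-Go sketch of the same decision:

```go
// driftcheck.go - sketch of the drift test above: reconfigure only when
// the freshly rendered kubeadm config differs from the one on disk.
// minikube itself uses `diff -u` over SSH; this byte comparison is an
// equivalent yes/no check without the human-readable diff.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
)

func main() {
	current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	rendered, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(current, rendered) {
		fmt.Println("kubeadm config drift detected: reconfigure required")
		return
	}
	fmt.Println("kubeadm config unchanged")
}
```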
	I0803 17:30:31.656744    2470 kubeadm.go:1160] stopping kube-system containers ...
	I0803 17:30:31.656787    2470 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 17:30:31.667033    2470 docker.go:483] Stopping containers: [ba9904a992bc 08fc28df5209 ac8e5bfeb843 9efbbb232ca1 d80777695601 e2cc7ba91cd3 a44a0bfdc8e3 3dee2fb4c6d0 c5e60d852e92 2fdc2f38ae06 d92e5141f1d7 565f45beb3e2 474f10fc477c 7f189550ebda b0ffac7f2136 37fc4b7cffb0 4af3b84a4905 e09e03966b00 7f91b67171ca 4bb38fc9f199 daaf4245500e 0f0cd7f548f7 9577f3c2ae88 89bf2b837dbb e90aca1ddc4b 4aea2c5dac8b 7fa12de7ac44 a7643e2dc2c7 bbac113eb0e0 d93e25514d15]
	I0803 17:30:31.667093    2470 ssh_runner.go:195] Run: docker stop ba9904a992bc 08fc28df5209 ac8e5bfeb843 9efbbb232ca1 d80777695601 e2cc7ba91cd3 a44a0bfdc8e3 3dee2fb4c6d0 c5e60d852e92 2fdc2f38ae06 d92e5141f1d7 565f45beb3e2 474f10fc477c 7f189550ebda b0ffac7f2136 37fc4b7cffb0 4af3b84a4905 e09e03966b00 7f91b67171ca 4bb38fc9f199 daaf4245500e 0f0cd7f548f7 9577f3c2ae88 89bf2b837dbb e90aca1ddc4b 4aea2c5dac8b 7fa12de7ac44 a7643e2dc2c7 bbac113eb0e0 d93e25514d15
	I0803 17:30:31.682581    2470 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0803 17:30:31.778100    2470 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 17:30:31.783272    2470 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Aug  4 00:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Aug  4 00:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug  4 00:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug  4 00:29 /etc/kubernetes/scheduler.conf
	
	I0803 17:30:31.783302    2470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0803 17:30:31.787663    2470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0803 17:30:31.791946    2470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0803 17:30:31.796211    2470 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0803 17:30:31.796234    2470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 17:30:31.800122    2470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0803 17:30:31.803818    2470 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0803 17:30:31.803841    2470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 17:30:31.807144    2470 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 17:30:31.810367    2470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 17:30:31.829637    2470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 17:30:32.768494    2470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0803 17:30:32.876156    2470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 17:30:32.907305    2470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
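	The restart path replays individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full `kubeadm init`. A sketch of driving those same phases from Go, assuming kubeadm is on PATH — in the real flow each command runs over SSH inside the guest with PATH pointing at /var/lib/minikube/binaries/v1.30.3:

```go
// phases.go - sketch of replaying the kubeadm init phases logged above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, args := range phases {
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// Each phase is idempotent enough to rerun against an existing node.
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubeadm %v failed: %v\n%s", args, err, out)
		}
	}
}
```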
	I0803 17:30:32.936254    2470 api_server.go:52] waiting for apiserver process to appear ...
	I0803 17:30:32.936310    2470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 17:30:33.438372    2470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 17:30:33.938340    2470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 17:30:33.943371    2470 api_server.go:72] duration metric: took 1.00714125s to wait for apiserver process to appear ...
	I0803 17:30:33.943377    2470 api_server.go:88] waiting for apiserver healthz status ...
	I0803 17:30:33.943385    2470 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0803 17:30:36.134302    2470 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0803 17:30:36.134311    2470 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0803 17:30:36.134317    2470 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0803 17:30:36.171001    2470 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0803 17:30:36.171010    2470 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0803 17:30:36.445408    2470 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0803 17:30:36.448346    2470 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0803 17:30:36.448356    2470 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0803 17:30:36.945377    2470 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0803 17:30:36.948067    2470 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0803 17:30:36.948076    2470 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0803 17:30:37.445361    2470 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0803 17:30:37.447945    2470 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0803 17:30:37.451502    2470 api_server.go:141] control plane version: v1.30.3
	I0803 17:30:37.451507    2470 api_server.go:131] duration metric: took 3.50820825s to wait for apiserver health ...
	I0803 17:30:37.451511    2470 cni.go:84] Creating CNI manager for ""
	I0803 17:30:37.451516    2470 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 17:30:37.456115    2470 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 17:30:37.460049    2470 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 17:30:37.463892    2470 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
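	The 496-byte payload scp'd above is minikube's bridge CNI conflist. A hedged sketch of writing a conflist of the same general shape — the subnet and plugin fields below are assumptions for illustration, not the actual bytes from this run:

```go
// cniconf.go - illustrative only: writes a bridge CNI conflist of the
// general shape minikube installs at /etc/cni/net.d/1-k8s.conflist.
// Field values are assumed, not taken from this log.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```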
	I0803 17:30:37.469368    2470 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 17:30:37.474052    2470 system_pods.go:59] 7 kube-system pods found
	I0803 17:30:37.474059    2470 system_pods.go:61] "coredns-7db6d8ff4d-fdc5c" [0df09d02-667d-435c-a404-469aa750208f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0803 17:30:37.474062    2470 system_pods.go:61] "etcd-functional-959000" [fcf6563c-a4dd-4209-b900-f56e3c7d035a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0803 17:30:37.474065    2470 system_pods.go:61] "kube-apiserver-functional-959000" [95557562-0fd4-4fb5-bf6a-7f075bc79df9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0803 17:30:37.474067    2470 system_pods.go:61] "kube-controller-manager-functional-959000" [b93a1b40-2603-4852-94d0-fed9f683f2c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0803 17:30:37.474069    2470 system_pods.go:61] "kube-proxy-5gj64" [a787e84b-7f25-4275-9eac-b56a1e0638e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0803 17:30:37.474071    2470 system_pods.go:61] "kube-scheduler-functional-959000" [2881b4cf-5cd4-4dc3-a1a3-7164a241788a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0803 17:30:37.474073    2470 system_pods.go:61] "storage-provisioner" [ce999f5e-bd8b-4112-b809-49fab632a548] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0803 17:30:37.474075    2470 system_pods.go:74] duration metric: took 4.704667ms to wait for pod list to return data ...
	I0803 17:30:37.474078    2470 node_conditions.go:102] verifying NodePressure condition ...
	I0803 17:30:37.475435    2470 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 17:30:37.475440    2470 node_conditions.go:123] node cpu capacity is 2
	I0803 17:30:37.475444    2470 node_conditions.go:105] duration metric: took 1.364458ms to run NodePressure ...
	I0803 17:30:37.475452    2470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 17:30:37.701272    2470 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0803 17:30:37.703556    2470 kubeadm.go:739] kubelet initialised
	I0803 17:30:37.703560    2470 kubeadm.go:740] duration metric: took 2.280625ms waiting for restarted kubelet to initialise ...
	I0803 17:30:37.703563    2470 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
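	Each of the "Ready" waits that follow boils down to polling the pod's PodReady condition. A minimal client-go sketch of that test, using the coredns pod name from this log and the default kubeconfig location — a simplified illustration, not minikube's own wait loop:

```go
// podready.go - sketch of the "Ready" check: fetch the pod and inspect
// its PodReady condition via client-go.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-fdc5c", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ready:", isReady(pod))
}
```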
	I0803 17:30:37.707171    2470 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace to be "Ready" ...
	I0803 17:30:39.710125    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:30:41.710698    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:30:43.712061    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:30:46.210413    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:30:48.212010    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:30:50.712057    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:30:53.211747    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:30:55.212088    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:30:57.711766    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:31:00.211396    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:31:02.711385    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:31:05.212055    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:31:07.710006    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:31:09.711710    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:31:12.211495    2470 pod_ready.go:102] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"False"
	I0803 17:31:13.211230    2470 pod_ready.go:92] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:13.211235    2470 pod_ready.go:81] duration metric: took 35.504872666s for pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:13.211239    2470 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:13.213395    2470 pod_ready.go:92] pod "etcd-functional-959000" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:13.213397    2470 pod_ready.go:81] duration metric: took 2.155791ms for pod "etcd-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:13.213400    2470 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:13.215266    2470 pod_ready.go:92] pod "kube-apiserver-functional-959000" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:13.215269    2470 pod_ready.go:81] duration metric: took 1.866417ms for pod "kube-apiserver-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:13.215272    2470 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:13.217071    2470 pod_ready.go:92] pod "kube-controller-manager-functional-959000" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:13.217074    2470 pod_ready.go:81] duration metric: took 1.799375ms for pod "kube-controller-manager-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:13.217076    2470 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5gj64" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:13.218904    2470 pod_ready.go:92] pod "kube-proxy-5gj64" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:13.218906    2470 pod_ready.go:81] duration metric: took 1.828291ms for pod "kube-proxy-5gj64" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:13.218909    2470 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:13.611688    2470 pod_ready.go:92] pod "kube-scheduler-functional-959000" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:13.611693    2470 pod_ready.go:81] duration metric: took 392.791125ms for pod "kube-scheduler-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:13.611697    2470 pod_ready.go:38] duration metric: took 35.908955583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 17:31:13.611710    2470 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 17:31:13.616650    2470 ops.go:34] apiserver oom_adj: -16
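	The oom_adj read above confirms the kubelet gave the apiserver a strongly negative OOM score (-16), making it one of the last processes the kernel kills under memory pressure. A tiny Go equivalent of that read; the pid lookup that `pgrep kube-apiserver` performs is elided and a placeholder used:

```go
// oomadj.go - sketch of reading /proc/<pid>/oom_adj as the log does.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	pid := 1234 // placeholder: resolve the real pid with pgrep in practice
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}
```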
	I0803 17:31:13.616655    2470 kubeadm.go:597] duration metric: took 41.969096417s to restartPrimaryControlPlane
	I0803 17:31:13.616658    2470 kubeadm.go:394] duration metric: took 41.979180208s to StartCluster
	I0803 17:31:13.616666    2470 settings.go:142] acquiring lock: {Name:mkc455f89a0a1d96857baea22a1ca4141ab02c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:31:13.616760    2470 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:31:13.617098    2470 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/kubeconfig: {Name:mk0a3c55e1982b2d92db1034b47f8334d27942c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:31:13.617341    2470 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:31:13.617376    2470 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 17:31:13.617412    2470 addons.go:69] Setting storage-provisioner=true in profile "functional-959000"
	I0803 17:31:13.617427    2470 addons.go:234] Setting addon storage-provisioner=true in "functional-959000"
	W0803 17:31:13.617431    2470 addons.go:243] addon storage-provisioner should already be in state true
	I0803 17:31:13.617440    2470 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:31:13.617444    2470 host.go:66] Checking if "functional-959000" exists ...
	I0803 17:31:13.617467    2470 addons.go:69] Setting default-storageclass=true in profile "functional-959000"
	I0803 17:31:13.617482    2470 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-959000"
	I0803 17:31:13.618452    2470 addons.go:234] Setting addon default-storageclass=true in "functional-959000"
	W0803 17:31:13.618454    2470 addons.go:243] addon default-storageclass should already be in state true
	I0803 17:31:13.618460    2470 host.go:66] Checking if "functional-959000" exists ...
	I0803 17:31:13.621719    2470 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 17:31:13.621844    2470 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 17:31:13.621852    2470 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/functional-959000/id_rsa Username:docker}
	I0803 17:31:13.624356    2470 out.go:177] * Verifying Kubernetes components...
	I0803 17:31:13.628354    2470 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 17:31:13.632376    2470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 17:31:13.635379    2470 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 17:31:13.635382    2470 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 17:31:13.635387    2470 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/functional-959000/id_rsa Username:docker}
	I0803 17:31:13.730166    2470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 17:31:13.736236    2470 node_ready.go:35] waiting up to 6m0s for node "functional-959000" to be "Ready" ...
	I0803 17:31:13.740455    2470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 17:31:13.771116    2470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 17:31:13.811989    2470 node_ready.go:49] node "functional-959000" has status "Ready":"True"
	I0803 17:31:13.811998    2470 node_ready.go:38] duration metric: took 75.752167ms for node "functional-959000" to be "Ready" ...
	I0803 17:31:13.812002    2470 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 17:31:14.014295    2470 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:14.072098    2470 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0803 17:31:14.078393    2470 addons.go:510] duration metric: took 461.03825ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0803 17:31:14.411816    2470 pod_ready.go:92] pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:14.411822    2470 pod_ready.go:81] duration metric: took 397.530125ms for pod "coredns-7db6d8ff4d-fdc5c" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:14.411826    2470 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:14.812454    2470 pod_ready.go:92] pod "etcd-functional-959000" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:14.812460    2470 pod_ready.go:81] duration metric: took 400.641125ms for pod "etcd-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:14.812464    2470 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:15.212049    2470 pod_ready.go:92] pod "kube-apiserver-functional-959000" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:15.212054    2470 pod_ready.go:81] duration metric: took 399.613458ms for pod "kube-apiserver-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:15.212059    2470 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:15.612110    2470 pod_ready.go:92] pod "kube-controller-manager-functional-959000" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:15.612117    2470 pod_ready.go:81] duration metric: took 400.111625ms for pod "kube-controller-manager-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:15.612121    2470 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5gj64" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:16.012125    2470 pod_ready.go:92] pod "kube-proxy-5gj64" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:16.012132    2470 pod_ready.go:81] duration metric: took 400.064209ms for pod "kube-proxy-5gj64" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:16.012136    2470 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:16.411448    2470 pod_ready.go:92] pod "kube-scheduler-functional-959000" in "kube-system" namespace has status "Ready":"True"
	I0803 17:31:16.411453    2470 pod_ready.go:81] duration metric: took 399.368417ms for pod "kube-scheduler-functional-959000" in "kube-system" namespace to be "Ready" ...
	I0803 17:31:16.411457    2470 pod_ready.go:38] duration metric: took 2.599665958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 17:31:16.411465    2470 api_server.go:52] waiting for apiserver process to appear ...
	I0803 17:31:16.411522    2470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 17:31:16.417219    2470 api_server.go:72] duration metric: took 2.800081625s to wait for apiserver process to appear ...
	I0803 17:31:16.417225    2470 api_server.go:88] waiting for apiserver healthz status ...
	I0803 17:31:16.417232    2470 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0803 17:31:16.419680    2470 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0803 17:31:16.420109    2470 api_server.go:141] control plane version: v1.30.3
	I0803 17:31:16.420112    2470 api_server.go:131] duration metric: took 2.886334ms to wait for apiserver health ...
	I0803 17:31:16.420115    2470 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 17:31:16.613336    2470 system_pods.go:59] 7 kube-system pods found
	I0803 17:31:16.613342    2470 system_pods.go:61] "coredns-7db6d8ff4d-fdc5c" [0df09d02-667d-435c-a404-469aa750208f] Running
	I0803 17:31:16.613344    2470 system_pods.go:61] "etcd-functional-959000" [fcf6563c-a4dd-4209-b900-f56e3c7d035a] Running
	I0803 17:31:16.613346    2470 system_pods.go:61] "kube-apiserver-functional-959000" [95557562-0fd4-4fb5-bf6a-7f075bc79df9] Running
	I0803 17:31:16.613348    2470 system_pods.go:61] "kube-controller-manager-functional-959000" [b93a1b40-2603-4852-94d0-fed9f683f2c5] Running
	I0803 17:31:16.613349    2470 system_pods.go:61] "kube-proxy-5gj64" [a787e84b-7f25-4275-9eac-b56a1e0638e8] Running
	I0803 17:31:16.613350    2470 system_pods.go:61] "kube-scheduler-functional-959000" [2881b4cf-5cd4-4dc3-a1a3-7164a241788a] Running
	I0803 17:31:16.613356    2470 system_pods.go:61] "storage-provisioner" [ce999f5e-bd8b-4112-b809-49fab632a548] Running
	I0803 17:31:16.613359    2470 system_pods.go:74] duration metric: took 193.267208ms to wait for pod list to return data ...
	I0803 17:31:16.613361    2470 default_sa.go:34] waiting for default service account to be created ...
	I0803 17:31:16.811570    2470 default_sa.go:45] found service account: "default"
	I0803 17:31:16.811577    2470 default_sa.go:55] duration metric: took 198.239042ms for default service account to be created ...
	I0803 17:31:16.811579    2470 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 17:31:17.013505    2470 system_pods.go:86] 7 kube-system pods found
	I0803 17:31:17.013526    2470 system_pods.go:89] "coredns-7db6d8ff4d-fdc5c" [0df09d02-667d-435c-a404-469aa750208f] Running
	I0803 17:31:17.013539    2470 system_pods.go:89] "etcd-functional-959000" [fcf6563c-a4dd-4209-b900-f56e3c7d035a] Running
	I0803 17:31:17.013543    2470 system_pods.go:89] "kube-apiserver-functional-959000" [95557562-0fd4-4fb5-bf6a-7f075bc79df9] Running
	I0803 17:31:17.013547    2470 system_pods.go:89] "kube-controller-manager-functional-959000" [b93a1b40-2603-4852-94d0-fed9f683f2c5] Running
	I0803 17:31:17.013551    2470 system_pods.go:89] "kube-proxy-5gj64" [a787e84b-7f25-4275-9eac-b56a1e0638e8] Running
	I0803 17:31:17.013555    2470 system_pods.go:89] "kube-scheduler-functional-959000" [2881b4cf-5cd4-4dc3-a1a3-7164a241788a] Running
	I0803 17:31:17.013563    2470 system_pods.go:89] "storage-provisioner" [ce999f5e-bd8b-4112-b809-49fab632a548] Running
	I0803 17:31:17.013570    2470 system_pods.go:126] duration metric: took 202.013459ms to wait for k8s-apps to be running ...
	I0803 17:31:17.013576    2470 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 17:31:17.013693    2470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 17:31:17.019982    2470 system_svc.go:56] duration metric: took 6.405084ms WaitForService to wait for kubelet
	I0803 17:31:17.019991    2470 kubeadm.go:582] duration metric: took 3.402943416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 17:31:17.020002    2470 node_conditions.go:102] verifying NodePressure condition ...
	I0803 17:31:17.211719    2470 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 17:31:17.211724    2470 node_conditions.go:123] node cpu capacity is 2
	I0803 17:31:17.211729    2470 node_conditions.go:105] duration metric: took 191.750084ms to run NodePressure ...
	I0803 17:31:17.211734    2470 start.go:241] waiting for startup goroutines ...
	I0803 17:31:17.211738    2470 start.go:246] waiting for cluster config update ...
	I0803 17:31:17.211742    2470 start.go:255] writing updated cluster config ...
	I0803 17:31:17.211971    2470 ssh_runner.go:195] Run: rm -f paused
	I0803 17:31:17.242858    2470 start.go:600] kubectl: 1.29.2, cluster: 1.30.3 (minor skew: 1)
	I0803 17:31:17.246588    2470 out.go:177] * Done! kubectl is now configured to use "functional-959000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 04 00:31:56 functional-959000 dockerd[6180]: time="2024-08-04T00:31:56.996870450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 04 00:31:56 functional-959000 dockerd[6180]: time="2024-08-04T00:31:56.996876158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:31:56 functional-959000 dockerd[6180]: time="2024-08-04T00:31:56.996903033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:31:57 functional-959000 dockerd[6173]: time="2024-08-04T00:31:57.018218924Z" level=info msg="ignoring event" container=6e12ab0f9d1687f541a7ba55d5846acedfaaa4e9caafb39580fc8e10df9272a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 00:31:57 functional-959000 dockerd[6180]: time="2024-08-04T00:31:57.018679419Z" level=info msg="shim disconnected" id=6e12ab0f9d1687f541a7ba55d5846acedfaaa4e9caafb39580fc8e10df9272a7 namespace=moby
	Aug 04 00:31:57 functional-959000 dockerd[6180]: time="2024-08-04T00:31:57.018714752Z" level=warning msg="cleaning up after shim disconnected" id=6e12ab0f9d1687f541a7ba55d5846acedfaaa4e9caafb39580fc8e10df9272a7 namespace=moby
	Aug 04 00:31:57 functional-959000 dockerd[6180]: time="2024-08-04T00:31:57.018732752Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 04 00:32:03 functional-959000 dockerd[6180]: time="2024-08-04T00:32:03.447072879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 04 00:32:03 functional-959000 dockerd[6180]: time="2024-08-04T00:32:03.447111170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 04 00:32:03 functional-959000 dockerd[6180]: time="2024-08-04T00:32:03.447120795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:32:03 functional-959000 dockerd[6180]: time="2024-08-04T00:32:03.447154878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:32:03 functional-959000 cri-dockerd[6513]: time="2024-08-04T00:32:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9334ac932867d195feed873570922b22dacc1ad1bf8023790bb6825c19fb82cb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 04 00:32:04 functional-959000 cri-dockerd[6513]: time="2024-08-04T00:32:04Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 04 00:32:04 functional-959000 dockerd[6180]: time="2024-08-04T00:32:04.789058979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 04 00:32:04 functional-959000 dockerd[6180]: time="2024-08-04T00:32:04.789123521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 04 00:32:04 functional-959000 dockerd[6180]: time="2024-08-04T00:32:04.789131687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:32:04 functional-959000 dockerd[6180]: time="2024-08-04T00:32:04.789161312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 00:32:04 functional-959000 dockerd[6173]: time="2024-08-04T00:32:04.822774446Z" level=info msg="ignoring event" container=8bf146c5becc6495b9063af94be32b8efad5644b1864cf9d349b79e958ef3dfc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 00:32:04 functional-959000 dockerd[6180]: time="2024-08-04T00:32:04.823028361Z" level=info msg="shim disconnected" id=8bf146c5becc6495b9063af94be32b8efad5644b1864cf9d349b79e958ef3dfc namespace=moby
	Aug 04 00:32:04 functional-959000 dockerd[6180]: time="2024-08-04T00:32:04.823075986Z" level=warning msg="cleaning up after shim disconnected" id=8bf146c5becc6495b9063af94be32b8efad5644b1864cf9d349b79e958ef3dfc namespace=moby
	Aug 04 00:32:04 functional-959000 dockerd[6180]: time="2024-08-04T00:32:04.823081319Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 04 00:32:06 functional-959000 dockerd[6173]: time="2024-08-04T00:32:06.586032635Z" level=info msg="ignoring event" container=9334ac932867d195feed873570922b22dacc1ad1bf8023790bb6825c19fb82cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 04 00:32:06 functional-959000 dockerd[6180]: time="2024-08-04T00:32:06.586257926Z" level=info msg="shim disconnected" id=9334ac932867d195feed873570922b22dacc1ad1bf8023790bb6825c19fb82cb namespace=moby
	Aug 04 00:32:06 functional-959000 dockerd[6180]: time="2024-08-04T00:32:06.586307759Z" level=warning msg="cleaning up after shim disconnected" id=9334ac932867d195feed873570922b22dacc1ad1bf8023790bb6825c19fb82cb namespace=moby
	Aug 04 00:32:06 functional-959000 dockerd[6180]: time="2024-08-04T00:32:06.586313092Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8bf146c5becc6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 seconds ago        Exited              mount-munger              0                   9334ac932867d       busybox-mount
	6e12ab0f9d168       72565bf5bbedf                                                                                         12 seconds ago       Exited              echoserver-arm            2                   8b7ad4e1775b5       hello-node-connect-6f49f58cd5-kpf5r
	bdc7d4b6657c0       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         13 seconds ago       Running             myfrontend                0                   e78de03e2c4f6       sp-pod
	3dad53f463004       72565bf5bbedf                                                                                         26 seconds ago       Exited              echoserver-arm            2                   3856efb7e393e       hello-node-65f5d5cc78-bk9ck
	b44636d80040b       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         36 seconds ago       Running             nginx                     0                   f500a58a79dc6       nginx-svc
	bb38b569669e0       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       4                   7e4f94b7f6138       storage-provisioner
	09ca68c00c0dd       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   4d0db0a6a1b8a       coredns-7db6d8ff4d-fdc5c
	c217c400d9357       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       3                   7e4f94b7f6138       storage-provisioner
	d93f216aca1e2       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   3fee6bb7559d8       kube-proxy-5gj64
	eae4ed244bf84       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   6af941785decd       etcd-functional-959000
	a322e1ebc7914       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   5cbafbc7d1700       kube-scheduler-functional-959000
	ed7c12a91dd49       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   0bf0b132ca601       kube-controller-manager-functional-959000
	3300c45a49441       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   67652fbba3724       kube-apiserver-functional-959000
	08fc28df52098       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   d807776956014       coredns-7db6d8ff4d-fdc5c
	9efbbb232ca13       2351f570ed0ea                                                                                         2 minutes ago        Exited              kube-proxy                1                   e2cc7ba91cd38       kube-proxy-5gj64
	c5e60d852e926       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   7f189550ebdae       etcd-functional-959000
	2fdc2f38ae06d       8e97cdb19e7cc                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   565f45beb3e28       kube-controller-manager-functional-959000
	d92e5141f1d7c       d48f992a22722                                                                                         2 minutes ago        Exited              kube-scheduler            1                   474f10fc477cc       kube-scheduler-functional-959000
	
	
	==> coredns [08fc28df5209] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33495 - 47820 "HINFO IN 6773666571902372626.1784668808177215671. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009496634s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[453595367]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:29:34.860) (total time: 30000ms):
	Trace[453595367]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:30:04.861)
	Trace[453595367]: [30.00051497s] [30.00051497s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1022614353]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:29:34.860) (total time: 30000ms):
	Trace[1022614353]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:30:04.861)
	Trace[1022614353]: [30.000886309s] [30.000886309s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[678881598]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:29:34.860) (total time: 30000ms):
	Trace[678881598]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:30:04.861)
	Trace[678881598]: [30.000736639s] [30.000736639s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [09ca68c00c0d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1685599813]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:30:37.454) (total time: 30000ms):
	Trace[1685599813]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:31:07.455)
	Trace[1685599813]: [30.000823134s] [30.000823134s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[386393226]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:30:37.454) (total time: 30000ms):
	Trace[386393226]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:31:07.455)
	Trace[386393226]: [30.000857134s] [30.000857134s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1547961104]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:30:37.455) (total time: 30000ms):
	Trace[1547961104]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:31:07.455)
	Trace[1547961104]: [30.000795843s] [30.000795843s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.1:38951 - 4392 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000101249s
	[INFO] 10.244.0.1:59124 - 60409 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000095666s
	[INFO] 10.244.0.1:50138 - 27597 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000026709s
	[INFO] 10.244.0.1:25675 - 54170 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001117117s
	[INFO] 10.244.0.1:9185 - 42010 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000060583s
	[INFO] 10.244.0.1:8580 - 63383 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000091708s
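
	Note: the reflector failures above are CoreDNS's client-go informer timing out against the kubernetes Service VIP (10.96.0.1:443); the timeouts appear to coincide with the control-plane restarts visible in the etcd and kube-apiserver sections below, and once the apiserver is back the nginx-svc queries succeed. A minimal probe of the same List call is sketched here; it is illustrative only and assumes it runs inside a pod whose service account may list namespaces.

	// probe.go: hedged sketch (not from this repo). Issues the same
	// GET /api/v1/namespaces?limit=500 the reflector logs show timing out,
	// to check whether the kubernetes Service VIP is reachable in-cluster.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // uses the pod's service-account token
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same 30s budget the reflector traces report before giving up.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		ns, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 500})
		if err != nil {
			fmt.Println("list failed:", err) // expect "i/o timeout" while the VIP is unreachable
			return
		}
		fmt.Printf("listed %d namespaces\n", len(ns.Items))
	}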
	
	
	==> describe nodes <==
	Name:               functional-959000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-959000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=functional-959000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T17_28_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:28:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-959000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:32:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:32:08 +0000   Sun, 04 Aug 2024 00:28:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:32:08 +0000   Sun, 04 Aug 2024 00:28:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:32:08 +0000   Sun, 04 Aug 2024 00:28:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:32:08 +0000   Sun, 04 Aug 2024 00:28:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-959000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb05e011a8ff420d8c65236702963838
	  System UUID:                bb05e011a8ff420d8c65236702963838
	  Boot ID:                    40e2681e-c660-4857-9aa9-72353345096e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-bk9ck                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     hello-node-connect-6f49f58cd5-kpf5r          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-7db6d8ff4d-fdc5c                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m3s
	  kube-system                 etcd-functional-959000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m19s
	  kube-system                 kube-apiserver-functional-959000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-functional-959000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 kube-proxy-5gj64                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 kube-scheduler-functional-959000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  Starting                 91s                    kube-proxy       
	  Normal  Starting                 2m34s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m23s (x8 over 3m23s)  kubelet          Node functional-959000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x8 over 3m23s)  kubelet          Node functional-959000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x7 over 3m23s)  kubelet          Node functional-959000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m19s                  kubelet          Node functional-959000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m19s                  kubelet          Node functional-959000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m19s                  kubelet          Node functional-959000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m15s                  kubelet          Node functional-959000 status is now: NodeReady
	  Normal  RegisteredNode           3m4s                   node-controller  Node functional-959000 event: Registered Node functional-959000 in Controller
	  Normal  Starting                 2m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node functional-959000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node functional-959000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m39s (x7 over 2m39s)  kubelet          Node functional-959000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m23s                  node-controller  Node functional-959000 event: Registered Node functional-959000 in Controller
	  Normal  Starting                 97s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  97s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    96s (x8 over 97s)      kubelet          Node functional-959000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x7 over 97s)      kubelet          Node functional-959000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  96s (x8 over 97s)      kubelet          Node functional-959000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           80s                    node-controller  Node functional-959000 event: Registered Node functional-959000 in Controller
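
	Note: the percentages in the Allocated resources table are request sums over node capacity: 750m CPU against the 2-CPU (2000m) capacity is 750/2000 = 37.5%, shown as 37%, and 170Mi (174080Ki) against 3904740Ki of memory is roughly 4.4%, shown as 4%.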
	
	
	==> dmesg <==
	[  +1.128967] systemd-fstab-generator[4232]: Ignoring "noauto" option for root device
	[  +4.399814] kauditd_printk_skb: 198 callbacks suppressed
	[ +11.798123] kauditd_printk_skb: 32 callbacks suppressed
	[Aug 4 00:30] systemd-fstab-generator[5266]: Ignoring "noauto" option for root device
	[ +10.740066] systemd-fstab-generator[5692]: Ignoring "noauto" option for root device
	[  +0.055863] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.096917] systemd-fstab-generator[5727]: Ignoring "noauto" option for root device
	[  +0.091253] systemd-fstab-generator[5739]: Ignoring "noauto" option for root device
	[  +0.101150] systemd-fstab-generator[5753]: Ignoring "noauto" option for root device
	[  +5.091482] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.311632] systemd-fstab-generator[6391]: Ignoring "noauto" option for root device
	[  +0.092539] systemd-fstab-generator[6403]: Ignoring "noauto" option for root device
	[  +0.076423] systemd-fstab-generator[6415]: Ignoring "noauto" option for root device
	[  +0.089803] systemd-fstab-generator[6498]: Ignoring "noauto" option for root device
	[  +0.214720] systemd-fstab-generator[6673]: Ignoring "noauto" option for root device
	[  +1.383199] systemd-fstab-generator[6798]: Ignoring "noauto" option for root device
	[  +0.874149] kauditd_printk_skb: 179 callbacks suppressed
	[ +15.264252] kauditd_printk_skb: 52 callbacks suppressed
	[Aug 4 00:31] systemd-fstab-generator[7936]: Ignoring "noauto" option for root device
	[  +5.053278] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.246487] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.868205] kauditd_printk_skb: 20 callbacks suppressed
	[ +10.131833] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.551060] kauditd_printk_skb: 38 callbacks suppressed
	[Aug 4 00:32] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [c5e60d852e92] <==
	{"level":"info","ts":"2024-08-04T00:29:31.210108Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:29:32.996105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-04T00:29:32.996276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T00:29:32.996331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-04T00:29:32.996411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:29:32.996427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-04T00:29:32.9965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-04T00:29:32.996707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-04T00:29:32.999394Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-959000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:29:32.999469Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:29:32.999738Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:29:32.999772Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:29:32.999804Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:29:33.004436Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-04T00:29:33.00445Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:30:18.682882Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-04T00:30:18.682912Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-959000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-04T00:30:18.682952Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:30:18.682993Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:30:18.699143Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:30:18.699163Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-04T00:30:18.700402Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-04T00:30:18.703251Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-04T00:30:18.70331Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-04T00:30:18.703314Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-959000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [eae4ed244bf8] <==
	{"level":"info","ts":"2024-08-04T00:30:33.955245Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:30:33.955299Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-04T00:30:33.955379Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:30:33.955398Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:30:33.955414Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:30:33.955524Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-04T00:30:33.955553Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-04T00:30:33.956137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-08-04T00:30:33.956187Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-04T00:30:33.956232Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:30:33.956274Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:30:35.638743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-04T00:30:35.638888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:30:35.638933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-04T00:30:35.638967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-04T00:30:35.638983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-04T00:30:35.639007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-04T00:30:35.63903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-04T00:30:35.643478Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-959000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:30:35.643474Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:30:35.643595Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:30:35.644662Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:30:35.644699Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:30:35.648396Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:30:35.648404Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 00:32:09 up 3 min,  0 users,  load average: 0.42, 0.34, 0.15
	Linux functional-959000 5.10.207 #1 SMP PREEMPT Mon Jul 29 12:07:32 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3300c45a4944] <==
	I0804 00:30:36.249441       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 00:30:36.249503       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 00:30:36.249773       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 00:30:36.250107       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 00:30:36.250821       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 00:30:36.250882       1 aggregator.go:165] initial CRD sync complete...
	I0804 00:30:36.250902       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 00:30:36.250935       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 00:30:36.250956       1 cache.go:39] Caches are synced for autoregister controller
	I0804 00:30:36.251962       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0804 00:30:36.272606       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 00:30:37.153133       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0804 00:30:37.261685       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0804 00:30:37.262346       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 00:30:37.264702       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0804 00:30:37.543667       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 00:30:37.547533       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 00:30:37.558648       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 00:30:37.566877       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 00:30:37.569175       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0804 00:31:18.747764       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.111.81"}
	I0804 00:31:23.631158       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0804 00:31:23.675004       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.127.10"}
	I0804 00:31:27.590038       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.105.61"}
	I0804 00:31:39.994530       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.86.142"}
	
	
	==> kube-controller-manager [2fdc2f38ae06] <==
	I0804 00:29:46.287255       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0804 00:29:46.288689       1 shared_informer.go:320] Caches are synced for endpoint
	I0804 00:29:46.291528       1 shared_informer.go:320] Caches are synced for PVC protection
	I0804 00:29:46.293563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.494616ms"
	I0804 00:29:46.293601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.458µs"
	I0804 00:29:46.312963       1 shared_informer.go:320] Caches are synced for stateful set
	I0804 00:29:46.319322       1 shared_informer.go:320] Caches are synced for daemon sets
	I0804 00:29:46.320258       1 shared_informer.go:320] Caches are synced for attach detach
	I0804 00:29:46.321528       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0804 00:29:46.321601       1 shared_informer.go:320] Caches are synced for GC
	I0804 00:29:46.322317       1 shared_informer.go:320] Caches are synced for deployment
	I0804 00:29:46.325018       1 shared_informer.go:320] Caches are synced for HPA
	I0804 00:29:46.363006       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0804 00:29:46.364101       1 shared_informer.go:320] Caches are synced for ephemeral
	I0804 00:29:46.364156       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0804 00:29:46.366433       1 shared_informer.go:320] Caches are synced for job
	I0804 00:29:46.367533       1 shared_informer.go:320] Caches are synced for disruption
	I0804 00:29:46.369669       1 shared_informer.go:320] Caches are synced for persistent volume
	I0804 00:29:46.416069       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:29:46.427883       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:29:46.831327       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:29:46.863436       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:29:46.863451       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0804 00:30:06.855351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.63864ms"
	I0804 00:30:06.855542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.833µs"
	
	
	==> kube-controller-manager [ed7c12a91dd4] <==
	I0804 00:30:49.667719       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0804 00:30:49.673194       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0804 00:30:50.041034       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:30:50.087130       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:30:50.087180       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0804 00:31:13.106941       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.825034ms"
	I0804 00:31:13.106965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.833µs"
	I0804 00:31:23.643547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="11.128121ms"
	I0804 00:31:23.648978       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="5.204645ms"
	I0804 00:31:23.649135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="24.583µs"
	I0804 00:31:23.649158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="5.5µs"
	I0804 00:31:23.650076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="11.791µs"
	I0804 00:31:30.297428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="17.625µs"
	I0804 00:31:31.301762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="24.25µs"
	I0804 00:31:32.309592       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="22.458µs"
	I0804 00:31:39.960771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="7.627534ms"
	I0804 00:31:39.971859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="10.954554ms"
	I0804 00:31:39.975135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="3.244229ms"
	I0804 00:31:39.975229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="76.166µs"
	I0804 00:31:41.364898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="113.166µs"
	I0804 00:31:42.370771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="21.916µs"
	I0804 00:31:43.384116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="24.333µs"
	I0804 00:31:43.392499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="23.292µs"
	I0804 00:31:56.977470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="22.666µs"
	I0804 00:31:57.466279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="22.583µs"
	
	
	==> kube-proxy [9efbbb232ca1] <==
	I0804 00:29:34.773067       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:29:34.780970       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0804 00:29:34.804464       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:29:34.804487       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:29:34.804507       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:29:34.805703       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:29:34.805765       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:29:34.805770       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:29:34.806338       1 config.go:192] "Starting service config controller"
	I0804 00:29:34.806343       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:29:34.806354       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:29:34.806355       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:29:34.806489       1 config.go:319] "Starting node config controller"
	I0804 00:29:34.806491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:29:34.907143       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:29:34.907150       1 shared_informer.go:320] Caches are synced for node config
	I0804 00:29:34.907160       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d93f216aca1e] <==
	I0804 00:30:37.460542       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:30:37.464115       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0804 00:30:37.472286       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:30:37.472301       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:30:37.472308       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:30:37.472874       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:30:37.472935       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:30:37.472951       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:30:37.473320       1 config.go:192] "Starting service config controller"
	I0804 00:30:37.473331       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:30:37.473340       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:30:37.473342       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:30:37.473505       1 config.go:319] "Starting node config controller"
	I0804 00:30:37.473512       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:30:37.574006       1 shared_informer.go:320] Caches are synced for node config
	I0804 00:30:37.574052       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:30:37.574031       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a322e1ebc791] <==
	I0804 00:30:34.382911       1 serving.go:380] Generated self-signed cert in-memory
	W0804 00:30:36.169232       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:30:36.169251       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:30:36.169256       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:30:36.169258       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:30:36.201845       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:30:36.201857       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:30:36.202771       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:30:36.202944       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:30:36.202953       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:30:36.202965       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:30:36.303862       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
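
	Note: the requestheader warning above carries kubectl's own suggested remediation template. A hypothetical instantiation (the binding name and service account below are illustrative placeholders, not taken from this run) would be:

	# illustrative names filled into the template from the warning message
	kubectl create rolebinding extension-apiserver-authentication-reader -n kube-system --role=extension-apiserver-authentication-reader --serviceaccount=kube-system:kube-scheduler

	In this log the scheduler continues without the configmap and syncs its client-CA informer moments later, so the warning is transient here.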
	
	
	==> kube-scheduler [d92e5141f1d7] <==
	E0804 00:29:33.589601       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0804 00:29:33.589619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0804 00:29:33.589644       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 00:29:33.589658       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0804 00:29:33.589665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0804 00:29:33.589717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 00:29:33.589725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0804 00:29:33.589765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0804 00:29:33.589773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0804 00:29:33.589787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0804 00:29:33.589791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0804 00:29:33.589810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0804 00:29:33.589847       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0804 00:29:33.589864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0804 00:29:33.589868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0804 00:29:33.589882       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 00:29:33.589892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0804 00:29:33.589904       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 00:29:33.589911       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0804 00:29:33.589935       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0804 00:29:33.589942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0804 00:29:33.589986       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0804 00:29:33.590011       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0804 00:29:34.788759       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0804 00:30:18.690504       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 04 00:31:54 functional-959000 kubelet[6805]: I0804 00:31:54.446533    6805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d12ca6d9d38a2573fe56ac5d6a6bec2ebee675ec04dd4ff959620671a2d5d81b"} err="failed to get container status \"d12ca6d9d38a2573fe56ac5d6a6bec2ebee675ec04dd4ff959620671a2d5d81b\": rpc error: code = Unknown desc = Error response from daemon: No such container: d12ca6d9d38a2573fe56ac5d6a6bec2ebee675ec04dd4ff959620671a2d5d81b"
	Aug 04 00:31:54 functional-959000 kubelet[6805]: I0804 00:31:54.511787    6805 topology_manager.go:215] "Topology Admit Handler" podUID="ea704d57-3768-4165-a9c6-d47eab3b4c6b" podNamespace="default" podName="sp-pod"
	Aug 04 00:31:54 functional-959000 kubelet[6805]: E0804 00:31:54.511828    6805 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d7fe8c7-78dd-451d-a1e9-496fdc7aabf5" containerName="myfrontend"
	Aug 04 00:31:54 functional-959000 kubelet[6805]: I0804 00:31:54.511847    6805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7fe8c7-78dd-451d-a1e9-496fdc7aabf5" containerName="myfrontend"
	Aug 04 00:31:54 functional-959000 kubelet[6805]: I0804 00:31:54.549395    6805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk6v5\" (UniqueName: \"kubernetes.io/projected/ea704d57-3768-4165-a9c6-d47eab3b4c6b-kube-api-access-bk6v5\") pod \"sp-pod\" (UID: \"ea704d57-3768-4165-a9c6-d47eab3b4c6b\") " pod="default/sp-pod"
	Aug 04 00:31:54 functional-959000 kubelet[6805]: I0804 00:31:54.549485    6805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-422acbd9-f614-4ea1-916d-14547c5353ac\" (UniqueName: \"kubernetes.io/host-path/ea704d57-3768-4165-a9c6-d47eab3b4c6b-pvc-422acbd9-f614-4ea1-916d-14547c5353ac\") pod \"sp-pod\" (UID: \"ea704d57-3768-4165-a9c6-d47eab3b4c6b\") " pod="default/sp-pod"
	Aug 04 00:31:54 functional-959000 kubelet[6805]: I0804 00:31:54.960411    6805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d7fe8c7-78dd-451d-a1e9-496fdc7aabf5" path="/var/lib/kubelet/pods/1d7fe8c7-78dd-451d-a1e9-496fdc7aabf5/volumes"
	Aug 04 00:31:56 functional-959000 kubelet[6805]: I0804 00:31:56.958282    6805 scope.go:117] "RemoveContainer" containerID="3dad53f463004ebaff0e1ecef0e0b7c8a4e776e60fc0b38d5ab409b3f82cb024"
	Aug 04 00:31:56 functional-959000 kubelet[6805]: I0804 00:31:56.958349    6805 scope.go:117] "RemoveContainer" containerID="db0b8f788d7425bf797cee29fc1a41aa7400408ba64c24e8ba45fb89eca5cd38"
	Aug 04 00:31:56 functional-959000 kubelet[6805]: E0804 00:31:56.958397    6805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-bk9ck_default(7c5e22a8-e526-4ff8-abc0-d6e3fa30d98c)\"" pod="default/hello-node-65f5d5cc78-bk9ck" podUID="7c5e22a8-e526-4ff8-abc0-d6e3fa30d98c"
	Aug 04 00:31:56 functional-959000 kubelet[6805]: I0804 00:31:56.981449    6805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.281637014 podStartE2EDuration="2.981439369s" podCreationTimestamp="2024-08-04 00:31:54 +0000 UTC" firstStartedPulling="2024-08-04 00:31:54.901506505 +0000 UTC m=+81.999668002" lastFinishedPulling="2024-08-04 00:31:55.601308818 +0000 UTC m=+82.699470357" observedRunningTime="2024-08-04 00:31:56.454607819 +0000 UTC m=+83.552769357" watchObservedRunningTime="2024-08-04 00:31:56.981439369 +0000 UTC m=+84.079600908"
	Aug 04 00:31:57 functional-959000 kubelet[6805]: I0804 00:31:57.460388    6805 scope.go:117] "RemoveContainer" containerID="db0b8f788d7425bf797cee29fc1a41aa7400408ba64c24e8ba45fb89eca5cd38"
	Aug 04 00:31:57 functional-959000 kubelet[6805]: I0804 00:31:57.460513    6805 scope.go:117] "RemoveContainer" containerID="6e12ab0f9d1687f541a7ba55d5846acedfaaa4e9caafb39580fc8e10df9272a7"
	Aug 04 00:31:57 functional-959000 kubelet[6805]: E0804 00:31:57.460602    6805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-kpf5r_default(7409f7b9-271c-41e7-acc0-ea2ca3c3b5cb)\"" pod="default/hello-node-connect-6f49f58cd5-kpf5r" podUID="7409f7b9-271c-41e7-acc0-ea2ca3c3b5cb"
	Aug 04 00:32:03 functional-959000 kubelet[6805]: I0804 00:32:03.100552    6805 topology_manager.go:215] "Topology Admit Handler" podUID="3efdb5e6-cf76-40d3-9f49-8c47116da252" podNamespace="default" podName="busybox-mount"
	Aug 04 00:32:03 functional-959000 kubelet[6805]: I0804 00:32:03.198324    6805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skcd6\" (UniqueName: \"kubernetes.io/projected/3efdb5e6-cf76-40d3-9f49-8c47116da252-kube-api-access-skcd6\") pod \"busybox-mount\" (UID: \"3efdb5e6-cf76-40d3-9f49-8c47116da252\") " pod="default/busybox-mount"
	Aug 04 00:32:03 functional-959000 kubelet[6805]: I0804 00:32:03.198366    6805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3efdb5e6-cf76-40d3-9f49-8c47116da252-test-volume\") pod \"busybox-mount\" (UID: \"3efdb5e6-cf76-40d3-9f49-8c47116da252\") " pod="default/busybox-mount"
	Aug 04 00:32:03 functional-959000 kubelet[6805]: I0804 00:32:03.493802    6805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9334ac932867d195feed873570922b22dacc1ad1bf8023790bb6825c19fb82cb"
	Aug 04 00:32:06 functional-959000 kubelet[6805]: I0804 00:32:06.718446    6805 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skcd6\" (UniqueName: \"kubernetes.io/projected/3efdb5e6-cf76-40d3-9f49-8c47116da252-kube-api-access-skcd6\") pod \"3efdb5e6-cf76-40d3-9f49-8c47116da252\" (UID: \"3efdb5e6-cf76-40d3-9f49-8c47116da252\") "
	Aug 04 00:32:06 functional-959000 kubelet[6805]: I0804 00:32:06.718465    6805 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3efdb5e6-cf76-40d3-9f49-8c47116da252-test-volume\") pod \"3efdb5e6-cf76-40d3-9f49-8c47116da252\" (UID: \"3efdb5e6-cf76-40d3-9f49-8c47116da252\") "
	Aug 04 00:32:06 functional-959000 kubelet[6805]: I0804 00:32:06.718501    6805 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3efdb5e6-cf76-40d3-9f49-8c47116da252-test-volume" (OuterVolumeSpecName: "test-volume") pod "3efdb5e6-cf76-40d3-9f49-8c47116da252" (UID: "3efdb5e6-cf76-40d3-9f49-8c47116da252"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 04 00:32:06 functional-959000 kubelet[6805]: I0804 00:32:06.721151    6805 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efdb5e6-cf76-40d3-9f49-8c47116da252-kube-api-access-skcd6" (OuterVolumeSpecName: "kube-api-access-skcd6") pod "3efdb5e6-cf76-40d3-9f49-8c47116da252" (UID: "3efdb5e6-cf76-40d3-9f49-8c47116da252"). InnerVolumeSpecName "kube-api-access-skcd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 04 00:32:06 functional-959000 kubelet[6805]: I0804 00:32:06.818566    6805 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-skcd6\" (UniqueName: \"kubernetes.io/projected/3efdb5e6-cf76-40d3-9f49-8c47116da252-kube-api-access-skcd6\") on node \"functional-959000\" DevicePath \"\""
	Aug 04 00:32:06 functional-959000 kubelet[6805]: I0804 00:32:06.818578    6805 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3efdb5e6-cf76-40d3-9f49-8c47116da252-test-volume\") on node \"functional-959000\" DevicePath \"\""
	Aug 04 00:32:07 functional-959000 kubelet[6805]: I0804 00:32:07.523897    6805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9334ac932867d195feed873570922b22dacc1ad1bf8023790bb6825c19fb82cb"
	
	
	==> storage-provisioner [bb38b569669e] <==
	I0804 00:30:49.007138       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 00:30:49.011932       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 00:30:49.012257       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0804 00:31:06.401922       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0804 00:31:06.401995       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-959000_fbd4d995-9b61-4e28-b1e7-406989871b74!
	I0804 00:31:06.402163       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24780bb3-51e2-495d-834a-79010e2fcc3e", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-959000_fbd4d995-9b61-4e28-b1e7-406989871b74 became leader
	I0804 00:31:06.502100       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-959000_fbd4d995-9b61-4e28-b1e7-406989871b74!
	I0804 00:31:41.230247       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0804 00:31:41.230287       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    a8a43187-93d4-4e7b-ab0e-bf29f89813e2 383 0 2024-08-04 00:29:07 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-04 00:29:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-422acbd9-f614-4ea1-916d-14547c5353ac &PersistentVolumeClaim{ObjectMeta:{myclaim  default  422acbd9-f614-4ea1-916d-14547c5353ac 803 0 2024-08-04 00:31:41 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-04 00:31:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-04 00:31:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0804 00:31:41.230597       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-422acbd9-f614-4ea1-916d-14547c5353ac" provisioned
	I0804 00:31:41.230607       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0804 00:31:41.230610       1 volume_store.go:212] Trying to save persistentvolume "pvc-422acbd9-f614-4ea1-916d-14547c5353ac"
	I0804 00:31:41.231064       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"422acbd9-f614-4ea1-916d-14547c5353ac", APIVersion:"v1", ResourceVersion:"803", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0804 00:31:41.238220       1 volume_store.go:219] persistentvolume "pvc-422acbd9-f614-4ea1-916d-14547c5353ac" saved
	I0804 00:31:41.238582       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"422acbd9-f614-4ea1-916d-14547c5353ac", APIVersion:"v1", ResourceVersion:"803", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-422acbd9-f614-4ea1-916d-14547c5353ac
	
	
	==> storage-provisioner [c217c400d935] <==
	I0804 00:30:37.412761       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0804 00:30:37.413446       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
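Note the two storage-provisioner instances in the dump above: [c217c400d935] exits fatally at startup because the API server at 10.96.0.1:443 was not yet accepting connections, while the later [bb38b569669e] starts cleanly and provisions default/myclaim. The fatal line ("error getting server version") is the signature of an initial server-version probe. A minimal client-go sketch of that kind of probe, assuming in-cluster configuration; the provisioner's actual source may differ:

	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves the service address (10.96.0.1:443 here)
		// from KUBERNETES_SERVICE_HOST/PORT and the mounted service-account token.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("error building in-cluster config: %v", err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("error building clientset: %v", err)
		}
		// If the apiserver is not up yet, this fails exactly like the log line
		// above: "error getting server version: ... connection refused".
		v, err := clientset.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		log.Printf("server version: %s", v.GitVersion)
	}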
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-959000 -n functional-959000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-959000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-959000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-959000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-959000/192.168.105.4
	Start Time:       Sat, 03 Aug 2024 17:32:03 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://8bf146c5becc6495b9063af94be32b8efad5644b1864cf9d349b79e958ef3dfc
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 03 Aug 2024 17:32:04 -0700
	      Finished:     Sat, 03 Aug 2024 17:32:04 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-skcd6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-skcd6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6s    default-scheduler  Successfully assigned default/busybox-mount to functional-959000
	  Normal  Pulling    6s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.248s (1.248s including waiting). Image size: 3547125 bytes.
	  Normal  Created    5s    kubelet            Created container mount-munger
	  Normal  Started    5s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (29.69s)
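The post-mortem above (helpers_test.go:261) finds the leftover busybox-mount pod with a kubectl field selector on status.phase!=Running; note the pod actually completed (Status: Succeeded), which still matches that filter. A minimal client-go equivalent of the same query, assuming a default kubeconfig; names and paths here are illustrative:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from ~/.kube/config; the test uses the
		// "functional-959000" context.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same filter as the kubectl post-mortem: every pod whose phase is not Running.
		pods, err := clientset.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}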

TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-960000 node stop m02 -v=7 --alsologtostderr: (12.191473583s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr
E0803 17:37:04.614631    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:37:45.575882    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:39:05.801733    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:39:07.496099    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr: exit status 7 (2m55.964237666s)

-- stdout --
	ha-960000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-960000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-960000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0803 17:37:01.866367    3108 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:37:01.866522    3108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:37:01.866525    3108 out.go:304] Setting ErrFile to fd 2...
	I0803 17:37:01.866528    3108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:37:01.866658    3108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:37:01.866793    3108 out.go:298] Setting JSON to false
	I0803 17:37:01.866802    3108 mustload.go:65] Loading cluster: ha-960000
	I0803 17:37:01.866872    3108 notify.go:220] Checking for updates...
	I0803 17:37:01.867038    3108 config.go:182] Loaded profile config "ha-960000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:37:01.867046    3108 status.go:255] checking status of ha-960000 ...
	I0803 17:37:01.867830    3108 status.go:330] ha-960000 host status = "Running" (err=<nil>)
	I0803 17:37:01.867843    3108 host.go:66] Checking if "ha-960000" exists ...
	I0803 17:37:01.867933    3108 host.go:66] Checking if "ha-960000" exists ...
	I0803 17:37:01.868044    3108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 17:37:01.868052    3108 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/id_rsa Username:docker}
	W0803 17:37:27.793226    3108 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0803 17:37:27.793391    3108 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0803 17:37:27.793416    3108 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0803 17:37:27.793425    3108 status.go:257] ha-960000 status: &{Name:ha-960000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 17:37:27.793455    3108 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0803 17:37:27.793463    3108 status.go:255] checking status of ha-960000-m02 ...
	I0803 17:37:27.793921    3108 status.go:330] ha-960000-m02 host status = "Stopped" (err=<nil>)
	I0803 17:37:27.793931    3108 status.go:343] host is not running, skipping remaining checks
	I0803 17:37:27.793936    3108 status.go:257] ha-960000-m02 status: &{Name:ha-960000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 17:37:27.793947    3108 status.go:255] checking status of ha-960000-m03 ...
	I0803 17:37:27.794892    3108 status.go:330] ha-960000-m03 host status = "Running" (err=<nil>)
	I0803 17:37:27.794903    3108 host.go:66] Checking if "ha-960000-m03" exists ...
	I0803 17:37:27.795059    3108 host.go:66] Checking if "ha-960000-m03" exists ...
	I0803 17:37:27.795213    3108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 17:37:27.795222    3108 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m03/id_rsa Username:docker}
	W0803 17:38:42.796034    3108 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0803 17:38:42.796086    3108 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0803 17:38:42.796094    3108 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0803 17:38:42.796097    3108 status.go:257] ha-960000-m03 status: &{Name:ha-960000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 17:38:42.796110    3108 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0803 17:38:42.796125    3108 status.go:255] checking status of ha-960000-m04 ...
	I0803 17:38:42.796963    3108 status.go:330] ha-960000-m04 host status = "Running" (err=<nil>)
	I0803 17:38:42.796976    3108 host.go:66] Checking if "ha-960000-m04" exists ...
	I0803 17:38:42.797087    3108 host.go:66] Checking if "ha-960000-m04" exists ...
	I0803 17:38:42.797208    3108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 17:38:42.797216    3108 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m04/id_rsa Username:docker}
	W0803 17:39:57.795621    3108 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0803 17:39:57.795685    3108 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0803 17:39:57.795694    3108 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0803 17:39:57.795698    3108 status.go:257] ha-960000-m04 status: &{Name:ha-960000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0803 17:39:57.795706    3108 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
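Every unreachable node in the stderr above fails the same way: the SSH dial to port 22 sits in the kernel's connect timeout before erroring out, one node after another, which is why this single status call took nearly three minutes. A quick TCP reachability probe over the same endpoints, as a minimal Go sketch with the addresses taken from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SSH endpoints of the three nodes the status check could not reach.
		for _, addr := range []string{"192.168.105.5:22", "192.168.105.7:22", "192.168.105.8:22"} {
			// Use a short explicit deadline instead of the OS default the test ran into.
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err != nil {
				fmt.Printf("%s unreachable: %v\n", addr, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s reachable\n", addr)
		}
	}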
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr": ha-960000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-960000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-960000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-960000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr": ha-960000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-960000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-960000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-960000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr": ha-960000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-960000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-960000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-960000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000: exit status 3 (25.961154375s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0803 17:40:23.756521    3136 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0803 17:40:23.756533    3136 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-960000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)
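For context, the probe that times out in each status check above is just `df -h /var | awk 'NR==2{print $5}'` executed over a fresh SSH session (status.go:376). A hedged sketch of an equivalent probe with golang.org/x/crypto/ssh, using the node address and key path from the log; minikube's own sshutil differs in detail, and the relaxed host-key check is for test use only:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and address taken from the status log above.
		key, err := os.ReadFile("/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: skip host key verification
		}
		client, err := ssh.Dial("tcp", "192.168.105.5:22", cfg)
		if err != nil {
			log.Fatalf("dial: %v", err) // the "operation timed out" failure mode seen above
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.Output(`sh -c "df -h /var | awk 'NR==2{print $5}'"`)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("/var usage: %s", out)
	}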

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0803 17:41:23.631522    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.129352458s)
ha_test.go:413: expected profile "ha-960000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-960000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-960000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-960000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000
E0803 17:41:51.333810    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000: exit status 3 (25.964293375s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0803 17:42:07.844031    3154 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0803 17:42:07.844066    3154 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-960000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.09s)

TestMultiControlPlane/serial/RestartSecondaryNode (208.51s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-960000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.129880708s)

-- stdout --
	* Starting "ha-960000-m02" control-plane node in "ha-960000" cluster
	* Restarting existing qemu2 VM for "ha-960000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-960000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 17:42:07.914944    3161 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:42:07.915273    3161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:42:07.915277    3161 out.go:304] Setting ErrFile to fd 2...
	I0803 17:42:07.915280    3161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:42:07.915455    3161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:42:07.915792    3161 mustload.go:65] Loading cluster: ha-960000
	I0803 17:42:07.916058    3161 config.go:182] Loaded profile config "ha-960000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0803 17:42:07.916346    3161 host.go:58] "ha-960000-m02" host status: Stopped
	I0803 17:42:07.920883    3161 out.go:177] * Starting "ha-960000-m02" control-plane node in "ha-960000" cluster
	I0803 17:42:07.924817    3161 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:42:07.924837    3161 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:42:07.924847    3161 cache.go:56] Caching tarball of preloaded images
	I0803 17:42:07.924971    3161 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 17:42:07.924979    3161 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:42:07.925047    3161 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/ha-960000/config.json ...
	I0803 17:42:07.925494    3161 start.go:360] acquireMachinesLock for ha-960000-m02: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:42:07.925555    3161 start.go:364] duration metric: took 42.417µs to acquireMachinesLock for "ha-960000-m02"
	I0803 17:42:07.925567    3161 start.go:96] Skipping create...Using existing machine configuration
	I0803 17:42:07.925573    3161 fix.go:54] fixHost starting: m02
	I0803 17:42:07.925748    3161 fix.go:112] recreateIfNeeded on ha-960000-m02: state=Stopped err=<nil>
	W0803 17:42:07.925755    3161 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 17:42:07.929771    3161 out.go:177] * Restarting existing qemu2 VM for "ha-960000-m02" ...
	I0803 17:42:07.933732    3161 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:42:07.933782    3161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:0d:f6:02:cc:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m02/disk.qcow2
	I0803 17:42:07.936491    3161 main.go:141] libmachine: STDOUT: 
	I0803 17:42:07.936511    3161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:42:07.936538    3161 fix.go:56] duration metric: took 10.965708ms for fixHost
	I0803 17:42:07.936542    3161 start.go:83] releasing machines lock for "ha-960000-m02", held for 10.981542ms
	W0803 17:42:07.936551    3161 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:42:07.936586    3161 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:42:07.936591    3161 start.go:729] Will try again in 5 seconds ...
	I0803 17:42:12.938415    3161 start.go:360] acquireMachinesLock for ha-960000-m02: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:42:12.938829    3161 start.go:364] duration metric: took 338.417µs to acquireMachinesLock for "ha-960000-m02"
	I0803 17:42:12.938947    3161 start.go:96] Skipping create...Using existing machine configuration
	I0803 17:42:12.938960    3161 fix.go:54] fixHost starting: m02
	I0803 17:42:12.939575    3161 fix.go:112] recreateIfNeeded on ha-960000-m02: state=Stopped err=<nil>
	W0803 17:42:12.939599    3161 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 17:42:12.943640    3161 out.go:177] * Restarting existing qemu2 VM for "ha-960000-m02" ...
	I0803 17:42:12.947520    3161 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:42:12.947687    3161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:0d:f6:02:cc:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m02/disk.qcow2
	I0803 17:42:12.953459    3161 main.go:141] libmachine: STDOUT: 
	I0803 17:42:12.953509    3161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:42:12.953556    3161 fix.go:56] duration metric: took 14.598792ms for fixHost
	I0803 17:42:12.953568    3161 start.go:83] releasing machines lock for "ha-960000-m02", held for 14.724458ms
	W0803 17:42:12.953702    3161 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-960000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-960000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:42:12.958583    3161 out.go:177] 
	W0803 17:42:12.962627    3161 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:42:12.962644    3161 out.go:239] * 
	* 
	W0803 17:42:12.968429    3161 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:42:12.973617    3161 out.go:177] 

** /stderr **
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-960000 node start m02 -v=7 --alsologtostderr": exit status 80
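Both restart attempts above die at the same step: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so QEMU never receives the network file descriptor (fd=3 in the command line above) and the node start fails with exit status 80. A minimal Go check that the socket_vmnet daemon is actually listening on that path; the path is taken from the failing command:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path that the failing socket_vmnet_client invocation uses.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Mirrors the driver failure: Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}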
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr
E0803 17:44:05.794726    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr: exit status 7 (2m57.423207916s)

-- stdout --
	ha-960000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-960000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-960000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0803 17:42:13.034154    3165 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:42:13.034338    3165 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:42:13.034346    3165 out.go:304] Setting ErrFile to fd 2...
	I0803 17:42:13.034349    3165 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:42:13.034487    3165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:42:13.034645    3165 out.go:298] Setting JSON to false
	I0803 17:42:13.034660    3165 mustload.go:65] Loading cluster: ha-960000
	I0803 17:42:13.034696    3165 notify.go:220] Checking for updates...
	I0803 17:42:13.034934    3165 config.go:182] Loaded profile config "ha-960000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:42:13.034946    3165 status.go:255] checking status of ha-960000 ...
	I0803 17:42:13.035777    3165 status.go:330] ha-960000 host status = "Running" (err=<nil>)
	I0803 17:42:13.035789    3165 host.go:66] Checking if "ha-960000" exists ...
	I0803 17:42:13.035901    3165 host.go:66] Checking if "ha-960000" exists ...
	I0803 17:42:13.036041    3165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 17:42:13.036048    3165 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/id_rsa Username:docker}
	W0803 17:42:13.036228    3165 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0803 17:42:13.036245    3165 retry.go:31] will retry after 292.676327ms: dial tcp 192.168.105.5:22: connect: host is down
	W0803 17:42:13.331230    3165 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0803 17:42:13.331272    3165 retry.go:31] will retry after 508.476046ms: dial tcp 192.168.105.5:22: connect: host is down
	W0803 17:42:13.842381    3165 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0803 17:42:13.842472    3165 retry.go:31] will retry after 620.560764ms: dial tcp 192.168.105.5:22: connect: host is down
	W0803 17:42:40.390754    3165 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0803 17:42:40.390829    3165 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0803 17:42:40.390838    3165 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0803 17:42:40.390843    3165 status.go:257] ha-960000 status: &{Name:ha-960000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 17:42:40.390854    3165 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0803 17:42:40.390858    3165 status.go:255] checking status of ha-960000-m02 ...
	I0803 17:42:40.391114    3165 status.go:330] ha-960000-m02 host status = "Stopped" (err=<nil>)
	I0803 17:42:40.391121    3165 status.go:343] host is not running, skipping remaining checks
	I0803 17:42:40.391123    3165 status.go:257] ha-960000-m02 status: &{Name:ha-960000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 17:42:40.391128    3165 status.go:255] checking status of ha-960000-m03 ...
	I0803 17:42:40.391738    3165 status.go:330] ha-960000-m03 host status = "Running" (err=<nil>)
	I0803 17:42:40.391745    3165 host.go:66] Checking if "ha-960000-m03" exists ...
	I0803 17:42:40.391857    3165 host.go:66] Checking if "ha-960000-m03" exists ...
	I0803 17:42:40.392000    3165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 17:42:40.392007    3165 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m03/id_rsa Username:docker}
	W0803 17:43:55.393067    3165 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0803 17:43:55.393261    3165 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0803 17:43:55.393295    3165 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0803 17:43:55.393314    3165 status.go:257] ha-960000-m03 status: &{Name:ha-960000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 17:43:55.393354    3165 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0803 17:43:55.393371    3165 status.go:255] checking status of ha-960000-m04 ...
	I0803 17:43:55.396492    3165 status.go:330] ha-960000-m04 host status = "Running" (err=<nil>)
	I0803 17:43:55.396522    3165 host.go:66] Checking if "ha-960000-m04" exists ...
	I0803 17:43:55.397040    3165 host.go:66] Checking if "ha-960000-m04" exists ...
	I0803 17:43:55.397605    3165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 17:43:55.397631    3165 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000-m04/id_rsa Username:docker}
	W0803 17:45:10.398455    3165 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0803 17:45:10.398502    3165 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0803 17:45:10.398511    3165 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0803 17:45:10.398516    3165 status.go:257] ha-960000-m04 status: &{Name:ha-960000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0803 17:45:10.398524    3165 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000
E0803 17:45:28.861624    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000: exit status 3 (25.956978333s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0803 17:45:36.355017    3190 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0803 17:45:36.355026    3190 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-960000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.51s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-960000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-960000 -v=7 --alsologtostderr
E0803 17:49:05.748134    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-960000 -v=7 --alsologtostderr: (3m49.022751375s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-960000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-960000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.234185791s)

                                                
                                                
-- stdout --
	* [ha-960000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-960000" primary control-plane node in "ha-960000" cluster
	* Restarting existing qemu2 VM for "ha-960000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-960000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 17:50:44.696467    3287 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:50:44.696671    3287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:50:44.696676    3287 out.go:304] Setting ErrFile to fd 2...
	I0803 17:50:44.696679    3287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:50:44.696883    3287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:50:44.698238    3287 out.go:298] Setting JSON to false
	I0803 17:50:44.718524    3287 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3008,"bootTime":1722729636,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:50:44.718592    3287 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:50:44.723335    3287 out.go:177] * [ha-960000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:50:44.731490    3287 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:50:44.731513    3287 notify.go:220] Checking for updates...
	I0803 17:50:44.737390    3287 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:50:44.740462    3287 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:50:44.743391    3287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:50:44.746418    3287 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:50:44.749432    3287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:50:44.752760    3287 config.go:182] Loaded profile config "ha-960000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:50:44.752820    3287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:50:44.757384    3287 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 17:50:44.764463    3287 start.go:297] selected driver: qemu2
	I0803 17:50:44.764472    3287 start.go:901] validating driver "qemu2" against &{Name:ha-960000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-960000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:50:44.764561    3287 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:50:44.767238    3287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 17:50:44.767269    3287 cni.go:84] Creating CNI manager for ""
	I0803 17:50:44.767274    3287 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 17:50:44.767330    3287 start.go:340] cluster config:
	{Name:ha-960000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-960000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:50:44.771561    3287 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:50:44.779327    3287 out.go:177] * Starting "ha-960000" primary control-plane node in "ha-960000" cluster
	I0803 17:50:44.783401    3287 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:50:44.783418    3287 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:50:44.783431    3287 cache.go:56] Caching tarball of preloaded images
	I0803 17:50:44.783504    3287 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 17:50:44.783510    3287 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:50:44.783587    3287 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/ha-960000/config.json ...
	I0803 17:50:44.784014    3287 start.go:360] acquireMachinesLock for ha-960000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:50:44.784052    3287 start.go:364] duration metric: took 31.542µs to acquireMachinesLock for "ha-960000"
	I0803 17:50:44.784061    3287 start.go:96] Skipping create...Using existing machine configuration
	I0803 17:50:44.784067    3287 fix.go:54] fixHost starting: 
	I0803 17:50:44.784200    3287 fix.go:112] recreateIfNeeded on ha-960000: state=Stopped err=<nil>
	W0803 17:50:44.784210    3287 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 17:50:44.787407    3287 out.go:177] * Restarting existing qemu2 VM for "ha-960000" ...
	I0803 17:50:44.795475    3287 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:50:44.795518    3287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:c8:4f:ef:86:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/disk.qcow2
	I0803 17:50:44.797727    3287 main.go:141] libmachine: STDOUT: 
	I0803 17:50:44.797749    3287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:50:44.797781    3287 fix.go:56] duration metric: took 13.715416ms for fixHost
	I0803 17:50:44.797786    3287 start.go:83] releasing machines lock for "ha-960000", held for 13.729542ms
	W0803 17:50:44.797794    3287 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:50:44.797832    3287 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:50:44.797838    3287 start.go:729] Will try again in 5 seconds ...
	I0803 17:50:49.799874    3287 start.go:360] acquireMachinesLock for ha-960000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:50:49.800381    3287 start.go:364] duration metric: took 408.584µs to acquireMachinesLock for "ha-960000"
	I0803 17:50:49.800548    3287 start.go:96] Skipping create...Using existing machine configuration
	I0803 17:50:49.800571    3287 fix.go:54] fixHost starting: 
	I0803 17:50:49.801317    3287 fix.go:112] recreateIfNeeded on ha-960000: state=Stopped err=<nil>
	W0803 17:50:49.801344    3287 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 17:50:49.808824    3287 out.go:177] * Restarting existing qemu2 VM for "ha-960000" ...
	I0803 17:50:49.812761    3287 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:50:49.813007    3287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:c8:4f:ef:86:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/disk.qcow2
	I0803 17:50:49.822875    3287 main.go:141] libmachine: STDOUT: 
	I0803 17:50:49.822952    3287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:50:49.823070    3287 fix.go:56] duration metric: took 22.502625ms for fixHost
	I0803 17:50:49.823090    3287 start.go:83] releasing machines lock for "ha-960000", held for 22.685ms
	W0803 17:50:49.823294    3287 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-960000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-960000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:50:49.838875    3287 out.go:177] 
	W0803 17:50:49.841787    3287 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:50:49.841831    3287 out.go:239] * 
	* 
	W0803 17:50:49.844157    3287 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:50:49.856692    3287 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-960000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-960000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000: exit status 7 (32.432209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-960000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)
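
Every restart attempt in this test fails at the same step: the qemu launch is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet, and the client gets connection refused on the daemon's unix socket, so the VM never comes up. Whether the socket_vmnet daemon is actually listening can be confirmed by dialing the socket directly; a minimal standalone sketch (not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath from the profile config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here matches the driver failure in the log:
			// nothing is listening on the socket (daemon stopped, or wrong path).
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
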

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-960000 node delete m03 -v=7 --alsologtostderr: exit status 83 (38.996ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-960000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-960000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 17:50:49.995004    3300 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:50:49.995253    3300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:50:49.995256    3300 out.go:304] Setting ErrFile to fd 2...
	I0803 17:50:49.995257    3300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:50:49.995392    3300 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:50:49.995609    3300 mustload.go:65] Loading cluster: ha-960000
	I0803 17:50:49.995830    3300 config.go:182] Loaded profile config "ha-960000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0803 17:50:49.996125    3300 out.go:239] ! The control-plane node ha-960000 host is not running (will try others): state=Stopped
	! The control-plane node ha-960000 host is not running (will try others): state=Stopped
	W0803 17:50:49.996228    3300 out.go:239] ! The control-plane node ha-960000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-960000-m02 host is not running (will try others): state=Stopped
	I0803 17:50:50.000349    3300 out.go:177] * The control-plane node ha-960000-m03 host is not running: state=Stopped
	I0803 17:50:50.003256    3300 out.go:177]   To start a cluster, run: "minikube start -p ha-960000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-960000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr: exit status 7 (29.369667ms)

                                                
                                                
-- stdout --
	ha-960000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 17:50:50.034514    3302 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:50:50.034646    3302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:50:50.034649    3302 out.go:304] Setting ErrFile to fd 2...
	I0803 17:50:50.034652    3302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:50:50.034774    3302 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:50:50.034889    3302 out.go:298] Setting JSON to false
	I0803 17:50:50.034898    3302 mustload.go:65] Loading cluster: ha-960000
	I0803 17:50:50.034944    3302 notify.go:220] Checking for updates...
	I0803 17:50:50.035139    3302 config.go:182] Loaded profile config "ha-960000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:50:50.035147    3302 status.go:255] checking status of ha-960000 ...
	I0803 17:50:50.035352    3302 status.go:330] ha-960000 host status = "Stopped" (err=<nil>)
	I0803 17:50:50.035355    3302 status.go:343] host is not running, skipping remaining checks
	I0803 17:50:50.035358    3302 status.go:257] ha-960000 status: &{Name:ha-960000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 17:50:50.035367    3302 status.go:255] checking status of ha-960000-m02 ...
	I0803 17:50:50.035456    3302 status.go:330] ha-960000-m02 host status = "Stopped" (err=<nil>)
	I0803 17:50:50.035459    3302 status.go:343] host is not running, skipping remaining checks
	I0803 17:50:50.035461    3302 status.go:257] ha-960000-m02 status: &{Name:ha-960000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 17:50:50.035465    3302 status.go:255] checking status of ha-960000-m03 ...
	I0803 17:50:50.035555    3302 status.go:330] ha-960000-m03 host status = "Stopped" (err=<nil>)
	I0803 17:50:50.035557    3302 status.go:343] host is not running, skipping remaining checks
	I0803 17:50:50.035560    3302 status.go:257] ha-960000-m03 status: &{Name:ha-960000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 17:50:50.035563    3302 status.go:255] checking status of ha-960000-m04 ...
	I0803 17:50:50.035662    3302 status.go:330] ha-960000-m04 host status = "Stopped" (err=<nil>)
	I0803 17:50:50.035664    3302 status.go:343] host is not running, skipping remaining checks
	I0803 17:50:50.035666    3302 status.go:257] ha-960000-m04 status: &{Name:ha-960000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000: exit status 7 (30.26375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-960000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
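
The post-mortem helpers throughout this report call minikube status --format={{.Host}}; the format string is a Go text/template evaluated against the per-node status value that the stderr logs print (&{Name:... Host:Stopped Kubelet:Stopped ...}). A minimal sketch of that rendering, with the struct trimmed to the fields visible in the logs:

	package main

	import (
		"os"
		"text/template"
	)

	// Status carries only the fields shown in the status.go log lines;
	// minikube's real struct has more (TimeToStop, DockerEnv, ...).
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		st := Status{Name: "ha-960000", Host: "Stopped", Kubelet: "Stopped"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		// prints "Stopped", matching the -- stdout -- blocks above
	}
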

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.323145625s)
ha_test.go:413: expected profile "ha-960000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-960000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-960000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-960000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000: exit status 7 (53.173083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-960000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.38s)
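
The assertion at ha_test.go:413 reads minikube profile list --output json and compares the Status field of the ha-960000 entry against the expected "Degraded"; with every node stopped it sees "Stopped" instead. The JSON shape is fully visible in the failure message above; a minimal sketch of pulling the status out (the struct names here are ours):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList models just enough of `minikube profile list --output json`
	// to read a profile's status; the key names follow the JSON in the log above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		raw := `{"invalid":[],"valid":[{"Name":"ha-960000","Status":"Stopped"}]}`
		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // ha-960000: Stopped (test wanted Degraded)
		}
	}
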

                                                
                                    
TestMultiControlPlane/serial/StopCluster (202.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 stop -v=7 --alsologtostderr
E0803 17:51:23.577116    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:52:46.639591    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:54:05.739031    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-960000 stop -v=7 --alsologtostderr: (3m21.990885833s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr: exit status 7 (65.567209ms)

                                                
                                                
-- stdout --
	ha-960000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 17:54:13.492145    3374 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:54:13.492350    3374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:54:13.492354    3374 out.go:304] Setting ErrFile to fd 2...
	I0803 17:54:13.492358    3374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:54:13.492522    3374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:54:13.492689    3374 out.go:298] Setting JSON to false
	I0803 17:54:13.492701    3374 mustload.go:65] Loading cluster: ha-960000
	I0803 17:54:13.492743    3374 notify.go:220] Checking for updates...
	I0803 17:54:13.493019    3374 config.go:182] Loaded profile config "ha-960000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:54:13.493029    3374 status.go:255] checking status of ha-960000 ...
	I0803 17:54:13.493321    3374 status.go:330] ha-960000 host status = "Stopped" (err=<nil>)
	I0803 17:54:13.493326    3374 status.go:343] host is not running, skipping remaining checks
	I0803 17:54:13.493329    3374 status.go:257] ha-960000 status: &{Name:ha-960000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 17:54:13.493342    3374 status.go:255] checking status of ha-960000-m02 ...
	I0803 17:54:13.493474    3374 status.go:330] ha-960000-m02 host status = "Stopped" (err=<nil>)
	I0803 17:54:13.493481    3374 status.go:343] host is not running, skipping remaining checks
	I0803 17:54:13.493484    3374 status.go:257] ha-960000-m02 status: &{Name:ha-960000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 17:54:13.493489    3374 status.go:255] checking status of ha-960000-m03 ...
	I0803 17:54:13.493621    3374 status.go:330] ha-960000-m03 host status = "Stopped" (err=<nil>)
	I0803 17:54:13.493626    3374 status.go:343] host is not running, skipping remaining checks
	I0803 17:54:13.493628    3374 status.go:257] ha-960000-m03 status: &{Name:ha-960000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 17:54:13.493633    3374 status.go:255] checking status of ha-960000-m04 ...
	I0803 17:54:13.493758    3374 status.go:330] ha-960000-m04 host status = "Stopped" (err=<nil>)
	I0803 17:54:13.493762    3374 status.go:343] host is not running, skipping remaining checks
	I0803 17:54:13.493764    3374 status.go:257] ha-960000-m04 status: &{Name:ha-960000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr": ha-960000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-960000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-960000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-960000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr": ha-960000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-960000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-960000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-960000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr": ha-960000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-960000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-960000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-960000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000: exit status 7 (32.793208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-960000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.09s)
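
The three follow-up assertions above (ha_test.go:543, :549 and :552) inspect the plain-text status output for how many control-plane entries, kubelets and apiservers are in the expected state; since the whole cluster is down, all four nodes report Stopped and the expected counts are never met. A sketch of that style of check, assuming simple substring counting (the real test logic may differ):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Abbreviated `minikube status` output, as in the -- stdout -- block above.
		out := "ha-960000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n\nha-960000-m04\ntype: Worker\nhost: Stopped\nkubelet: Stopped\n"
		fmt.Println(strings.Count(out, "type: Control Plane")) // 1 control-plane entry
		fmt.Println(strings.Count(out, "kubelet: Stopped"))    // 2 stopped kubelets
		fmt.Println(strings.Count(out, "apiserver: Stopped"))  // 1 stopped apiserver
	}
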

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-960000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-960000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.1788775s)

                                                
                                                
-- stdout --
	* [ha-960000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-960000" primary control-plane node in "ha-960000" cluster
	* Restarting existing qemu2 VM for "ha-960000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-960000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 17:54:13.555512    3378 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:54:13.555645    3378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:54:13.555651    3378 out.go:304] Setting ErrFile to fd 2...
	I0803 17:54:13.555653    3378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:54:13.555787    3378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:54:13.556816    3378 out.go:298] Setting JSON to false
	I0803 17:54:13.573109    3378 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3217,"bootTime":1722729636,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:54:13.573171    3378 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:54:13.578697    3378 out.go:177] * [ha-960000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:54:13.586632    3378 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:54:13.586683    3378 notify.go:220] Checking for updates...
	I0803 17:54:13.592585    3378 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:54:13.595565    3378 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:54:13.596737    3378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:54:13.599550    3378 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:54:13.602597    3378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:54:13.605930    3378 config.go:182] Loaded profile config "ha-960000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:54:13.606209    3378 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:54:13.610522    3378 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 17:54:13.617613    3378 start.go:297] selected driver: qemu2
	I0803 17:54:13.617620    3378 start.go:901] validating driver "qemu2" against &{Name:ha-960000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-960000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:54:13.617687    3378 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:54:13.619964    3378 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 17:54:13.620004    3378 cni.go:84] Creating CNI manager for ""
	I0803 17:54:13.620010    3378 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 17:54:13.620064    3378 start.go:340] cluster config:
	{Name:ha-960000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-960000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:54:13.623607    3378 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:54:13.632616    3378 out.go:177] * Starting "ha-960000" primary control-plane node in "ha-960000" cluster
	I0803 17:54:13.636463    3378 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:54:13.636483    3378 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:54:13.636493    3378 cache.go:56] Caching tarball of preloaded images
	I0803 17:54:13.636551    3378 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 17:54:13.636556    3378 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:54:13.636622    3378 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/ha-960000/config.json ...
	I0803 17:54:13.637015    3378 start.go:360] acquireMachinesLock for ha-960000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:54:13.637050    3378 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "ha-960000"
	I0803 17:54:13.637058    3378 start.go:96] Skipping create...Using existing machine configuration
	I0803 17:54:13.637066    3378 fix.go:54] fixHost starting: 
	I0803 17:54:13.637178    3378 fix.go:112] recreateIfNeeded on ha-960000: state=Stopped err=<nil>
	W0803 17:54:13.637187    3378 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 17:54:13.641630    3378 out.go:177] * Restarting existing qemu2 VM for "ha-960000" ...
	I0803 17:54:13.649593    3378 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:54:13.649647    3378 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:c8:4f:ef:86:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/disk.qcow2
	I0803 17:54:13.651709    3378 main.go:141] libmachine: STDOUT: 
	I0803 17:54:13.651730    3378 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:54:13.651756    3378 fix.go:56] duration metric: took 14.691542ms for fixHost
	I0803 17:54:13.651761    3378 start.go:83] releasing machines lock for "ha-960000", held for 14.70675ms
	W0803 17:54:13.651768    3378 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:54:13.651803    3378 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:54:13.651807    3378 start.go:729] Will try again in 5 seconds ...
	I0803 17:54:18.653852    3378 start.go:360] acquireMachinesLock for ha-960000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:54:18.654296    3378 start.go:364] duration metric: took 351.958µs to acquireMachinesLock for "ha-960000"
	I0803 17:54:18.654414    3378 start.go:96] Skipping create...Using existing machine configuration
	I0803 17:54:18.654434    3378 fix.go:54] fixHost starting: 
	I0803 17:54:18.655229    3378 fix.go:112] recreateIfNeeded on ha-960000: state=Stopped err=<nil>
	W0803 17:54:18.655255    3378 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 17:54:18.659719    3378 out.go:177] * Restarting existing qemu2 VM for "ha-960000" ...
	I0803 17:54:18.666597    3378 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:54:18.666824    3378 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:c8:4f:ef:86:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/ha-960000/disk.qcow2
	I0803 17:54:18.675771    3378 main.go:141] libmachine: STDOUT: 
	I0803 17:54:18.675842    3378 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:54:18.675914    3378 fix.go:56] duration metric: took 21.480458ms for fixHost
	I0803 17:54:18.675940    3378 start.go:83] releasing machines lock for "ha-960000", held for 21.622083ms
	W0803 17:54:18.676176    3378 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-960000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-960000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:54:18.683646    3378 out.go:177] 
	W0803 17:54:18.687688    3378 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:54:18.687742    3378 out.go:239] * 
	* 
	W0803 17:54:18.690724    3378 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:54:18.694699    3378 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-960000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000: exit status 7 (66.267583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-960000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
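Note: every failure in this report reduces to the same root cause, visible in the stderr blocks above: the qemu driver's networking helper cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal Go sketch of a probe for that condition (a hypothetical diagnostic, not part of minikube or the test suite; the socket path is the one logged above):

	// probe_socket_vmnet.go: dial the unix socket that socket_vmnet_client
	// passes to qemu as "-netdev socket,fd=3". If the daemon is not
	// listening, this prints the same "connection refused" error seen
	// throughout this report.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails, restarting the daemon on the CI host (however it is managed there; minikube's qemu driver documentation uses a Homebrew-installed socket_vmnet) should clear the repeated GUEST_PROVISION failures.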

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-960000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-960000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-960000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-960000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000: exit status 7 (29.774292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-960000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-960000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-960000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.733917ms)

-- stdout --
	* The control-plane node ha-960000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-960000"

-- /stdout --
** stderr ** 
	I0803 17:54:18.882736    3393 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:54:18.882885    3393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:54:18.882888    3393 out.go:304] Setting ErrFile to fd 2...
	I0803 17:54:18.882890    3393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:54:18.883011    3393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:54:18.883246    3393 mustload.go:65] Loading cluster: ha-960000
	I0803 17:54:18.883450    3393 config.go:182] Loaded profile config "ha-960000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0803 17:54:18.883764    3393 out.go:239] ! The control-plane node ha-960000 host is not running (will try others): state=Stopped
	! The control-plane node ha-960000 host is not running (will try others): state=Stopped
	W0803 17:54:18.883867    3393 out.go:239] ! The control-plane node ha-960000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-960000-m02 host is not running (will try others): state=Stopped
	I0803 17:54:18.888413    3393 out.go:177] * The control-plane node ha-960000-m03 host is not running: state=Stopped
	I0803 17:54:18.892566    3393 out.go:177]   To start a cluster, run: "minikube start -p ha-960000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-960000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-960000 -n ha-960000: exit status 7 (29.572292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-960000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (9.91s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-438000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-438000 --driver=qemu2 : exit status 80 (9.839271625s)

-- stdout --
	* [image-438000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-438000" primary control-plane node in "image-438000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-438000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-438000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-438000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-438000 -n image-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-438000 -n image-438000: exit status 7 (67.617292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.91s)

TestJSONOutput/start/Command (9.76s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-626000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-626000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.75930725s)

-- stdout --
	{"specversion":"1.0","id":"48f77b6a-5aef-4e63-b3a4-b252957f0dc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-626000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"435438e8-2d1a-4764-9c68-9dd6cd1b3558","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19364"}}
	{"specversion":"1.0","id":"899a2b54-7ba0-4f94-87e7-80528c159be6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig"}}
	{"specversion":"1.0","id":"3e7ffd06-0461-4194-89af-7d08efcf06fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ed35d98f-d802-4a45-9235-e13516e3162e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d134978f-776e-4048-aed9-d704a73e9861","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube"}}
	{"specversion":"1.0","id":"817a0f80-f57f-4fbd-be7e-e9fd4fd52f45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c78842f8-2be3-4d27-9d6c-260e15fef61f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9fd4af3-30aa-4e42-8f4e-ef524e8cf70f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9b060ae9-5ec4-46cc-ae15-3c66e17481f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-626000\" primary control-plane node in \"json-output-626000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f63a8952-e7ea-4ea5-b756-02a9d5505d3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c4958988-63ab-4d2b-a1c8-eedbae0c1dab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-626000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"9728de00-61f4-4f7a-8696-9ccaf2805242","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"583b6677-0786-421e-bd1e-bb51579d8be5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"2924a6ce-ca38-4159-91b0-e905dac2430a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-626000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"ecf7e77b-177e-4e75-89c7-0438f0185fee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"d797f46c-e01f-4e3f-a1d1-d64ccb60cd12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-626000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.76s)
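Note: the marshal failure above is mechanical. With --output=json, every stdout line is expected to parse as a CloudEvent, but the qemu driver interleaves bare "OUTPUT:" and "ERROR: ..." text, so decoding stops at the first non-JSON byte. A minimal Go sketch reproducing the error (hypothetical illustration, not the actual test parsing code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Two lines as they appear in the -- stdout -- block above: a valid
		// cloud event followed by a bare driver message.
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
			`OUTPUT: `,
		}
		for _, l := range lines {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(l), &ev); err != nil {
				// Prints: invalid character 'O' looking for beginning of value
				fmt.Println("converting to cloud events:", err)
			}
		}
	}

The same mismatch explains the unpause failure below, where the "*"-prefixed console text reaches the decoder first ("invalid character '*'").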

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-626000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-626000 --output=json --user=testUser: exit status 83 (76.674958ms)

-- stdout --
	{"specversion":"1.0","id":"7a58e598-fd70-4e5f-af5c-d1b307513df1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-626000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"bafde59e-5c17-43d3-ab75-e6e9e1d13b4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-626000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-626000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-626000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-626000 --output=json --user=testUser: exit status 83 (44.19575ms)

-- stdout --
	* The control-plane node json-output-626000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-626000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-626000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-626000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.18s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-150000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-150000 --driver=qemu2 : exit status 80 (9.887302916s)

-- stdout --
	* [first-150000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-150000" primary control-plane node in "first-150000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-150000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-150000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-150000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-03 17:54:51.446803 -0700 PDT m=+2096.679187293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-151000 -n second-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-151000 -n second-151000: exit status 85 (78.683833ms)

-- stdout --
	* Profile "second-151000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-151000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-151000" host is not running, skipping log retrieval (state="* Profile \"second-151000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-151000\"")
helpers_test.go:175: Cleaning up "second-151000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-151000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-03 17:54:51.636403 -0700 PDT m=+2096.868792584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-150000 -n first-150000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-150000 -n first-150000: exit status 7 (29.34125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-150000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-150000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-150000
--- FAIL: TestMinikubeProfile (10.18s)

TestMountStart/serial/StartWithMountFirst (9.94s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-396000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-396000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.863720958s)

-- stdout --
	* [mount-start-1-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-396000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-396000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-396000 -n mount-start-1-396000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-396000 -n mount-start-1-396000: exit status 7 (70.985708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-396000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.94s)

TestMultiNode/serial/FreshStart2Nodes (9.91s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-483000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-483000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.838094417s)

-- stdout --
	* [multinode-483000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-483000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 17:55:01.877338    3529 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:55:01.877458    3529 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:55:01.877466    3529 out.go:304] Setting ErrFile to fd 2...
	I0803 17:55:01.877471    3529 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:55:01.877607    3529 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:55:01.878629    3529 out.go:298] Setting JSON to false
	I0803 17:55:01.894939    3529 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3265,"bootTime":1722729636,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:55:01.895024    3529 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:55:01.901608    3529 out.go:177] * [multinode-483000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:55:01.908624    3529 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:55:01.908666    3529 notify.go:220] Checking for updates...
	I0803 17:55:01.915585    3529 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:55:01.918488    3529 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:55:01.921586    3529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:55:01.924594    3529 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:55:01.925978    3529 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:55:01.929728    3529 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:55:01.933522    3529 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 17:55:01.938546    3529 start.go:297] selected driver: qemu2
	I0803 17:55:01.938551    3529 start.go:901] validating driver "qemu2" against <nil>
	I0803 17:55:01.938558    3529 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:55:01.940729    3529 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 17:55:01.943561    3529 out.go:177] * Automatically selected the socket_vmnet network
	I0803 17:55:01.946722    3529 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 17:55:01.946748    3529 cni.go:84] Creating CNI manager for ""
	I0803 17:55:01.946753    3529 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0803 17:55:01.946758    3529 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 17:55:01.946787    3529 start.go:340] cluster config:
	{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:55:01.950514    3529 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:55:01.958600    3529 out.go:177] * Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	I0803 17:55:01.962477    3529 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:55:01.962493    3529 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:55:01.962507    3529 cache.go:56] Caching tarball of preloaded images
	I0803 17:55:01.962567    3529 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 17:55:01.962573    3529 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:55:01.962781    3529 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/multinode-483000/config.json ...
	I0803 17:55:01.962793    3529 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/multinode-483000/config.json: {Name:mkd8649da579c0a9e617fe713602880a55ae7520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:55:01.963013    3529 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:55:01.963047    3529 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "multinode-483000"
	I0803 17:55:01.963057    3529 start.go:93] Provisioning new machine with config: &{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:55:01.963090    3529 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:55:01.974561    3529 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 17:55:01.992645    3529 start.go:159] libmachine.API.Create for "multinode-483000" (driver="qemu2")
	I0803 17:55:01.992673    3529 client.go:168] LocalClient.Create starting
	I0803 17:55:01.992736    3529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:55:01.992766    3529 main.go:141] libmachine: Decoding PEM data...
	I0803 17:55:01.992774    3529 main.go:141] libmachine: Parsing certificate...
	I0803 17:55:01.992818    3529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:55:01.992840    3529 main.go:141] libmachine: Decoding PEM data...
	I0803 17:55:01.992853    3529 main.go:141] libmachine: Parsing certificate...
	I0803 17:55:01.993200    3529 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:55:02.148115    3529 main.go:141] libmachine: Creating SSH key...
	I0803 17:55:02.238585    3529 main.go:141] libmachine: Creating Disk image...
	I0803 17:55:02.238590    3529 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:55:02.238759    3529 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2
	I0803 17:55:02.248062    3529 main.go:141] libmachine: STDOUT: 
	I0803 17:55:02.248076    3529 main.go:141] libmachine: STDERR: 
	I0803 17:55:02.248149    3529 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2 +20000M
	I0803 17:55:02.255996    3529 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:55:02.256013    3529 main.go:141] libmachine: STDERR: 
	I0803 17:55:02.256023    3529 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2
	I0803 17:55:02.256029    3529 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:55:02.256041    3529 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:55:02.256070    3529 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:7c:d3:75:6b:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2
	I0803 17:55:02.257685    3529 main.go:141] libmachine: STDOUT: 
	I0803 17:55:02.257699    3529 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:55:02.257717    3529 client.go:171] duration metric: took 265.047ms to LocalClient.Create
	I0803 17:55:04.259843    3529 start.go:128] duration metric: took 2.296807875s to createHost
	I0803 17:55:04.259954    3529 start.go:83] releasing machines lock for "multinode-483000", held for 2.296911125s
	W0803 17:55:04.260032    3529 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:55:04.267365    3529 out.go:177] * Deleting "multinode-483000" in qemu2 ...
	W0803 17:55:04.296926    3529 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:55:04.296955    3529 start.go:729] Will try again in 5 seconds ...
	I0803 17:55:09.298963    3529 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:55:09.299416    3529 start.go:364] duration metric: took 342.292µs to acquireMachinesLock for "multinode-483000"
	I0803 17:55:09.299550    3529 start.go:93] Provisioning new machine with config: &{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:55:09.299842    3529 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:55:09.315407    3529 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 17:55:09.364128    3529 start.go:159] libmachine.API.Create for "multinode-483000" (driver="qemu2")
	I0803 17:55:09.364171    3529 client.go:168] LocalClient.Create starting
	I0803 17:55:09.364285    3529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:55:09.364345    3529 main.go:141] libmachine: Decoding PEM data...
	I0803 17:55:09.364361    3529 main.go:141] libmachine: Parsing certificate...
	I0803 17:55:09.364424    3529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:55:09.364469    3529 main.go:141] libmachine: Decoding PEM data...
	I0803 17:55:09.364479    3529 main.go:141] libmachine: Parsing certificate...
	I0803 17:55:09.364970    3529 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:55:09.545310    3529 main.go:141] libmachine: Creating SSH key...
	I0803 17:55:09.624641    3529 main.go:141] libmachine: Creating Disk image...
	I0803 17:55:09.624648    3529 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:55:09.624827    3529 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2
	I0803 17:55:09.634101    3529 main.go:141] libmachine: STDOUT: 
	I0803 17:55:09.634116    3529 main.go:141] libmachine: STDERR: 
	I0803 17:55:09.634182    3529 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2 +20000M
	I0803 17:55:09.642061    3529 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:55:09.642083    3529 main.go:141] libmachine: STDERR: 
	I0803 17:55:09.642095    3529 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2
	I0803 17:55:09.642100    3529 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:55:09.642114    3529 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:55:09.642146    3529 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:75:91:eb:21:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2
	I0803 17:55:09.643852    3529 main.go:141] libmachine: STDOUT: 
	I0803 17:55:09.643868    3529 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:55:09.643888    3529 client.go:171] duration metric: took 279.708458ms to LocalClient.Create
	I0803 17:55:11.646042    3529 start.go:128] duration metric: took 2.346243792s to createHost
	I0803 17:55:11.646099    3529 start.go:83] releasing machines lock for "multinode-483000", held for 2.346718959s
	W0803 17:55:11.646552    3529 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:55:11.657158    3529 out.go:177] 
	W0803 17:55:11.661218    3529 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:55:11.661267    3529 out.go:239] * 
	* 
	W0803 17:55:11.664024    3529 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:55:11.673089    3529 out.go:177] 

** /stderr **
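Note: every QEMU launch in this run dies the same way: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM is ever created and every later multinode step inherits a stopped host. A minimal sketch, assuming only the socket path shown in the log (the probe program itself is ours, not minikube code), of the kind of preflight check that would surface the missing daemon before any qemu start is attempted:

    // probe.go - check that a daemon is listening on the socket_vmnet
    // unix socket before attempting a qemu2 start.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // The failure mode in this log: "connection refused" means
            // nothing is running behind the socket (or it is stale).
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is up")
    }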
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-483000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (66.047666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.91s)
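For context on the failing command line above: socket_vmnet_client connects to the daemon's unix socket and launches qemu-system-aarch64 with the connected descriptor inherited as fd 3, which is why the guest NIC is declared as "-netdev socket,id=net0,fd=3". A hedged Go sketch of that descriptor-passing pattern (the socket path comes from the log; the trimmed qemu arguments and the program are illustrative, not minikube's implementation):

    // fdpass.go - hand an inherited unix-socket descriptor to a child as
    // fd 3, the pattern socket_vmnet_client uses when it launches qemu.
    package main

    import (
        "log"
        "net"
        "os"
        "os/exec"
    )

    func main() {
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            log.Fatal(err)
        }
        f, err := conn.(*net.UnixConn).File() // dup the fd so the child can inherit it
        if err != nil {
            log.Fatal(err)
        }
        // Arguments trimmed to the one flag that matters here; a real
        // launch carries the full command line shown in the log.
        cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
        cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }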

TestMultiNode/serial/DeployApp2Nodes (75.81s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (124.02775ms)

** stderr ** 
	error: cluster "multinode-483000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- rollout status deployment/busybox: exit status 1 (57.803042ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.767541ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.356667ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.919375ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.588416ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.655417ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.256875ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.651083ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.35325ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.0745ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0803 17:56:23.567721    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.482708ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
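The ten identical lookups above are the test's poll loop: multinode_test.go keeps re-running the podIP query until its deadline passes, then gives up at line 524. A condensed sketch of that retry shape (the interval and cap are illustrative guesses, not the test's actual constants):

    // poll.go - re-run the podIP lookup until it succeeds or a deadline
    // passes; the shape behind the repeated failures above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func podIPs(profile string) (string, error) {
        out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
            "--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
        return string(out), err
    }

    func main() {
        deadline := time.Now().Add(75 * time.Second) // illustrative cap
        for time.Now().Before(deadline) {
            if ips, err := podIPs("multinode-483000"); err == nil {
                fmt.Println("pod IPs:", ips)
                return
            }
            time.Sleep(5 * time.Second) // illustrative interval
        }
        fmt.Println("failed to resolve pod IPs: retries exhausted")
    }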
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.638334ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.29725ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.721583ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.510209ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (29.909416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (75.81s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-483000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.3655ms)

** stderr ** 
	error: no server found for cluster "multinode-483000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (29.688167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-483000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-483000 -v 3 --alsologtostderr: exit status 83 (38.151084ms)

-- stdout --
	* The control-plane node multinode-483000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-483000"

-- /stdout --
** stderr ** 
	I0803 17:56:27.681601    3611 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:27.681754    3611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:27.681757    3611 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:27.681760    3611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:27.681910    3611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:27.682154    3611 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:27.682340    3611 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:27.686824    3611 out.go:177] * The control-plane node multinode-483000 host is not running: state=Stopped
	I0803 17:56:27.689807    3611 out.go:177]   To start a cluster, run: "minikube start -p multinode-483000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-483000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (28.628208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-483000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-483000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.255959ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-483000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-483000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-483000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
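The second error here is a byproduct of the first: kubectl printed nothing, and in Go, json.Unmarshal over empty input always reports "unexpected end of JSON input". A two-line illustration:

    // emptyjson.go - why a failed kubectl call surfaces as a JSON decode error.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels) // kubectl produced no output
        fmt.Println(err)                           // "unexpected end of JSON input"
    }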
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (29.40425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.07s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-483000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-483000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-483000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-483000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (28.775667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status --output json --alsologtostderr: exit status 7 (29.18425ms)

-- stdout --
	{"Name":"multinode-483000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0803 17:56:27.882431    3623 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:27.882570    3623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:27.882574    3623 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:27.882576    3623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:27.882721    3623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:27.882833    3623 out.go:298] Setting JSON to true
	I0803 17:56:27.882842    3623 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:27.882922    3623 notify.go:220] Checking for updates...
	I0803 17:56:27.883032    3623 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:27.883037    3623 status.go:255] checking status of multinode-483000 ...
	I0803 17:56:27.883261    3623 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:56:27.883265    3623 status.go:343] host is not running, skipping remaining checks
	I0803 17:56:27.883267    3623 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-483000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
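The decode failure is structural: with a single node, "status --output json" emits one JSON object (as in the stdout above), while the test unmarshals into []cmd.Status and therefore needs an array, which presumably only appears once additional nodes exist. A tolerant decoder that accepts either shape (type names are ours, not minikube's):

    // status.go - tolerate both the single-object and the array shape of
    // "minikube status --output json".
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type nodeStatus struct { // minimal stand-in for the test's cmd.Status
        Name string
        Host string
    }

    func decodeStatuses(raw []byte) ([]nodeStatus, error) {
        var many []nodeStatus
        if err := json.Unmarshal(raw, &many); err == nil {
            return many, nil // multi-node: a JSON array
        }
        var one nodeStatus
        if err := json.Unmarshal(raw, &one); err != nil {
            return nil, err
        }
        return []nodeStatus{one}, nil // single node: a bare object
    }

    func main() {
        raw := []byte(`{"Name":"multinode-483000","Host":"Stopped"}`) // shape from the stdout above
        sts, err := decodeStatuses(raw)
        fmt.Println(sts, err)
    }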
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (29.692458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 node stop m03: exit status 85 (44.258291ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-483000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status: exit status 7 (29.375333ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr: exit status 7 (29.223333ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:56:28.015762    3631 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:28.015913    3631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:28.015916    3631 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:28.015918    3631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:28.016046    3631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:28.016158    3631 out.go:298] Setting JSON to false
	I0803 17:56:28.016167    3631 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:28.016220    3631 notify.go:220] Checking for updates...
	I0803 17:56:28.016346    3631 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:28.016352    3631 status.go:255] checking status of multinode-483000 ...
	I0803 17:56:28.016572    3631 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:56:28.016575    3631 status.go:343] host is not running, skipping remaining checks
	I0803 17:56:28.016577    3631 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr": multinode-483000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (29.6005ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (57.12s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.143541ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0803 17:56:28.075478    3635 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:28.075708    3635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:28.075712    3635 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:28.075714    3635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:28.075846    3635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:28.076078    3635 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:28.076245    3635 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:28.080761    3635 out.go:177] 
	W0803 17:56:28.083796    3635 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0803 17:56:28.083800    3635 out.go:239] * 
	* 
	W0803 17:56:28.085492    3635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:56:28.088780    3635 out.go:177] 

** /stderr **
multinode_test.go:284: I0803 17:56:28.075478    3635 out.go:291] Setting OutFile to fd 1 ...
I0803 17:56:28.075708    3635 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:56:28.075712    3635 out.go:304] Setting ErrFile to fd 2...
I0803 17:56:28.075714    3635 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:56:28.075846    3635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
I0803 17:56:28.076078    3635 mustload.go:65] Loading cluster: multinode-483000
I0803 17:56:28.076245    3635 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 17:56:28.080761    3635 out.go:177] 
W0803 17:56:28.083796    3635 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0803 17:56:28.083800    3635 out.go:239] * 
* 
W0803 17:56:28.085492    3635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0803 17:56:28.088780    3635 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-483000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (29.441708ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:56:28.121387    3637 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:28.121524    3637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:28.121528    3637 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:28.121530    3637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:28.121664    3637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:28.121793    3637 out.go:298] Setting JSON to false
	I0803 17:56:28.121802    3637 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:28.121857    3637 notify.go:220] Checking for updates...
	I0803 17:56:28.121995    3637 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:28.122001    3637 status.go:255] checking status of multinode-483000 ...
	I0803 17:56:28.122212    3637 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:56:28.122216    3637 status.go:343] host is not running, skipping remaining checks
	I0803 17:56:28.122218    3637 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (72.122ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:56:29.529787    3639 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:29.529990    3639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:29.529994    3639 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:29.529997    3639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:29.530174    3639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:29.530335    3639 out.go:298] Setting JSON to false
	I0803 17:56:29.530346    3639 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:29.530387    3639 notify.go:220] Checking for updates...
	I0803 17:56:29.530606    3639 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:29.530613    3639 status.go:255] checking status of multinode-483000 ...
	I0803 17:56:29.530882    3639 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:56:29.530886    3639 status.go:343] host is not running, skipping remaining checks
	I0803 17:56:29.530889    3639 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (72.905ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:56:31.316220    3643 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:31.316449    3643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:31.316454    3643 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:31.316458    3643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:31.316658    3643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:31.316822    3643 out.go:298] Setting JSON to false
	I0803 17:56:31.316836    3643 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:31.316881    3643 notify.go:220] Checking for updates...
	I0803 17:56:31.317140    3643 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:31.317148    3643 status.go:255] checking status of multinode-483000 ...
	I0803 17:56:31.317456    3643 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:56:31.317461    3643 status.go:343] host is not running, skipping remaining checks
	I0803 17:56:31.317464    3643 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (74.108125ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:56:34.309336    3645 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:34.309527    3645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:34.309531    3645 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:34.309535    3645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:34.309715    3645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:34.309880    3645 out.go:298] Setting JSON to false
	I0803 17:56:34.309892    3645 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:34.309921    3645 notify.go:220] Checking for updates...
	I0803 17:56:34.310124    3645 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:34.310132    3645 status.go:255] checking status of multinode-483000 ...
	I0803 17:56:34.310400    3645 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:56:34.310404    3645 status.go:343] host is not running, skipping remaining checks
	I0803 17:56:34.310407    3645 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (71.381333ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:56:37.952445    3647 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:37.952655    3647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:37.952659    3647 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:37.952662    3647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:37.952840    3647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:37.952997    3647 out.go:298] Setting JSON to false
	I0803 17:56:37.953008    3647 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:37.953041    3647 notify.go:220] Checking for updates...
	I0803 17:56:37.953283    3647 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:37.953290    3647 status.go:255] checking status of multinode-483000 ...
	I0803 17:56:37.953584    3647 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:56:37.953589    3647 status.go:343] host is not running, skipping remaining checks
	I0803 17:56:37.953592    3647 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (71.858ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:56:41.477083    3652 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:41.477282    3652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:41.477287    3652 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:41.477291    3652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:41.477468    3652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:41.477643    3652 out.go:298] Setting JSON to false
	I0803 17:56:41.477656    3652 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:41.477691    3652 notify.go:220] Checking for updates...
	I0803 17:56:41.477921    3652 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:41.477929    3652 status.go:255] checking status of multinode-483000 ...
	I0803 17:56:41.478249    3652 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:56:41.478254    3652 status.go:343] host is not running, skipping remaining checks
	I0803 17:56:41.478257    3652 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (71.696166ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:56:51.449938    3657 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:51.450146    3657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:51.450151    3657 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:51.450154    3657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:51.450356    3657 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:51.450548    3657 out.go:298] Setting JSON to false
	I0803 17:56:51.450560    3657 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:51.450595    3657 notify.go:220] Checking for updates...
	I0803 17:56:51.450816    3657 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:51.450823    3657 status.go:255] checking status of multinode-483000 ...
	I0803 17:56:51.451094    3657 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:56:51.451099    3657 status.go:343] host is not running, skipping remaining checks
	I0803 17:56:51.451102    3657 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (71.305875ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:56:59.855583    3659 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:56:59.855773    3659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:59.855777    3659 out.go:304] Setting ErrFile to fd 2...
	I0803 17:56:59.855780    3659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:56:59.855957    3659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:56:59.856145    3659 out.go:298] Setting JSON to false
	I0803 17:56:59.856155    3659 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:56:59.856191    3659 notify.go:220] Checking for updates...
	I0803 17:56:59.856425    3659 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:56:59.856433    3659 status.go:255] checking status of multinode-483000 ...
	I0803 17:56:59.856725    3659 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:56:59.856730    3659 status.go:343] host is not running, skipping remaining checks
	I0803 17:56:59.856733    3659 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr: exit status 7 (74.935417ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:57:25.133257    3663 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:57:25.133478    3663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:57:25.133483    3663 out.go:304] Setting ErrFile to fd 2...
	I0803 17:57:25.133486    3663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:57:25.133682    3663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:57:25.133895    3663 out.go:298] Setting JSON to false
	I0803 17:57:25.133909    3663 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:57:25.133949    3663 notify.go:220] Checking for updates...
	I0803 17:57:25.134190    3663 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:57:25.134197    3663 status.go:255] checking status of multinode-483000 ...
	I0803 17:57:25.134509    3663 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:57:25.134514    3663 status.go:343] host is not running, skipping remaining checks
	I0803 17:57:25.134517    3663 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-483000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (34.4725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (57.12s)
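
Note: the repetition above is the harness itself retrying. multinode_test.go:290 shells out to `minikube status` again and again, and every call exits with status 7 because the qemu2 host stays Stopped, until the test gives up at multinode_test.go:294. Below is a minimal Go sketch of that kind of polling loop; the binary path and profile name come from the log, while the attempt limit and backoff are illustrative assumptions, not the test's actual policy.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const profile = "multinode-483000" // profile under test, per the log above
	for attempt := 1; attempt <= 7; attempt++ {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"-p", profile, "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("status ok after %d attempt(s):\n%s", attempt, out)
			return
		}
		// minikube exits 7 here because the qemu2 VM never came back up,
		// so in this run every attempt lands in this branch.
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(time.Duration(attempt) * time.Second) // assumed backoff
	}
	fmt.Println("host never became ready; the test fails at this point")
}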

TestMultiNode/serial/RestartKeepsNodes (9.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-483000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-483000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-483000: (3.832263542s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-483000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-483000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.221566833s)

-- stdout --
	* [multinode-483000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	* Restarting existing qemu2 VM for "multinode-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 17:57:29.093322    3689 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:57:29.093480    3689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:57:29.093484    3689 out.go:304] Setting ErrFile to fd 2...
	I0803 17:57:29.093487    3689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:57:29.093657    3689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:57:29.094851    3689 out.go:298] Setting JSON to false
	I0803 17:57:29.114540    3689 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3413,"bootTime":1722729636,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:57:29.114614    3689 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:57:29.118882    3689 out.go:177] * [multinode-483000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:57:29.125868    3689 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:57:29.125909    3689 notify.go:220] Checking for updates...
	I0803 17:57:29.132822    3689 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:57:29.135813    3689 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:57:29.138752    3689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:57:29.141807    3689 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:57:29.144834    3689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:57:29.148137    3689 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:57:29.148200    3689 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:57:29.152764    3689 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 17:57:29.159803    3689 start.go:297] selected driver: qemu2
	I0803 17:57:29.159810    3689 start.go:901] validating driver "qemu2" against &{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:57:29.159862    3689 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:57:29.162391    3689 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 17:57:29.162434    3689 cni.go:84] Creating CNI manager for ""
	I0803 17:57:29.162440    3689 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 17:57:29.162484    3689 start.go:340] cluster config:
	{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:57:29.166269    3689 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:57:29.173820    3689 out.go:177] * Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	I0803 17:57:29.177868    3689 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:57:29.177883    3689 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:57:29.177900    3689 cache.go:56] Caching tarball of preloaded images
	I0803 17:57:29.177966    3689 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 17:57:29.177972    3689 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:57:29.178026    3689 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/multinode-483000/config.json ...
	I0803 17:57:29.178461    3689 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:57:29.178498    3689 start.go:364] duration metric: took 30.792µs to acquireMachinesLock for "multinode-483000"
	I0803 17:57:29.178508    3689 start.go:96] Skipping create...Using existing machine configuration
	I0803 17:57:29.178515    3689 fix.go:54] fixHost starting: 
	I0803 17:57:29.178650    3689 fix.go:112] recreateIfNeeded on multinode-483000: state=Stopped err=<nil>
	W0803 17:57:29.178658    3689 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 17:57:29.186779    3689 out.go:177] * Restarting existing qemu2 VM for "multinode-483000" ...
	I0803 17:57:29.190791    3689 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:57:29.190835    3689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:75:91:eb:21:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2
	I0803 17:57:29.193022    3689 main.go:141] libmachine: STDOUT: 
	I0803 17:57:29.193044    3689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:57:29.193071    3689 fix.go:56] duration metric: took 14.556583ms for fixHost
	I0803 17:57:29.193077    3689 start.go:83] releasing machines lock for "multinode-483000", held for 14.574167ms
	W0803 17:57:29.193084    3689 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:57:29.193120    3689 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:57:29.193125    3689 start.go:729] Will try again in 5 seconds ...
	I0803 17:57:34.195172    3689 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:57:34.195586    3689 start.go:364] duration metric: took 331.25µs to acquireMachinesLock for "multinode-483000"
	I0803 17:57:34.195711    3689 start.go:96] Skipping create...Using existing machine configuration
	I0803 17:57:34.195730    3689 fix.go:54] fixHost starting: 
	I0803 17:57:34.196469    3689 fix.go:112] recreateIfNeeded on multinode-483000: state=Stopped err=<nil>
	W0803 17:57:34.196496    3689 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 17:57:34.202012    3689 out.go:177] * Restarting existing qemu2 VM for "multinode-483000" ...
	I0803 17:57:34.209935    3689 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:57:34.210183    3689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:75:91:eb:21:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2
	I0803 17:57:34.219067    3689 main.go:141] libmachine: STDOUT: 
	I0803 17:57:34.219129    3689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:57:34.219201    3689 fix.go:56] duration metric: took 23.470416ms for fixHost
	I0803 17:57:34.219220    3689 start.go:83] releasing machines lock for "multinode-483000", held for 23.610458ms
	W0803 17:57:34.219392    3689 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:57:34.226948    3689 out.go:177] 
	W0803 17:57:34.229930    3689 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:57:34.229955    3689 out.go:239] * 
	* 
	W0803 17:57:34.232399    3689 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:57:34.239947    3689 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-483000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-483000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (33.326625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.19s)
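
Note: every start attempt in this run dies at the same point. qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot connect to the /var/run/socket_vmnet unix socket, so the VM never boots and minikube exits with status 80 (GUEST_PROVISION). A quick way to check that precondition before rerunning the suite is to dial the socket directly; the Go sketch below assumes a plain unix-domain stream socket at the path shown in the log.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the qemu command line above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the state the restart hit: nothing is listening, so
		// socket_vmnet_client reports "Connection refused" and the start fails.
		fmt.Printf("socket_vmnet unreachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}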

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 node delete m03: exit status 83 (45.9915ms)

-- stdout --
	* The control-plane node multinode-483000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-483000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-483000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr: exit status 7 (29.60175ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:57:34.431196    3704 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:57:34.431362    3704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:57:34.431365    3704 out.go:304] Setting ErrFile to fd 2...
	I0803 17:57:34.431368    3704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:57:34.431507    3704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:57:34.431631    3704 out.go:298] Setting JSON to false
	I0803 17:57:34.431639    3704 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:57:34.431711    3704 notify.go:220] Checking for updates...
	I0803 17:57:34.431821    3704 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:57:34.431827    3704 status.go:255] checking status of multinode-483000 ...
	I0803 17:57:34.432026    3704 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:57:34.432030    3704 status.go:343] host is not running, skipping remaining checks
	I0803 17:57:34.432033    3704 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (28.904334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
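
Note the two different non-zero codes in this section: `node delete` exits 83 and only prints the "host is not running" advice, while the follow-up `status` exits 7 against the stopped host. Below is a hedged Go sketch of how a harness can recover those numeric codes from a failed command; the command line is taken from the log, and the meaning attached to each code is inferred from the output above rather than from minikube's documentation.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"-p", "multinode-483000", "node", "delete", "m03")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above this prints 83: the delete was refused because
		// the control-plane host was stopped, not because m03 was missing.
		fmt.Printf("command failed with exit status %d\n", exitErr.ExitCode())
	}
}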

TestMultiNode/serial/StopMultiNode (3.28s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-483000 stop: (3.154885125s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status: exit status 7 (65.736917ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr: exit status 7 (31.796583ms)

-- stdout --
	multinode-483000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 17:57:37.713077    3731 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:57:37.713211    3731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:57:37.713218    3731 out.go:304] Setting ErrFile to fd 2...
	I0803 17:57:37.713220    3731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:57:37.713359    3731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:57:37.713484    3731 out.go:298] Setting JSON to false
	I0803 17:57:37.713493    3731 mustload.go:65] Loading cluster: multinode-483000
	I0803 17:57:37.713559    3731 notify.go:220] Checking for updates...
	I0803 17:57:37.713688    3731 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:57:37.713694    3731 status.go:255] checking status of multinode-483000 ...
	I0803 17:57:37.713905    3731 status.go:330] multinode-483000 host status = "Stopped" (err=<nil>)
	I0803 17:57:37.713908    3731 status.go:343] host is not running, skipping remaining checks
	I0803 17:57:37.713910    3731 status.go:257] multinode-483000 status: &{Name:multinode-483000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr": multinode-483000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-483000 status --alsologtostderr": multinode-483000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (29.396166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.28s)
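
The "incorrect number of stopped hosts/kubelets" assertions fail because the status output quoted above reports only the single control-plane node, while a multinode run should also report a stopped worker. A count-based check in that spirit is sketched below; the expected count of 2 is an assumption about what the multinode test provisions, and the counting approach is illustrative rather than a copy of multinode_test.go.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text as captured in the log above: only one node appears.
	status := "multinode-483000\n" +
		"type: Control Plane\n" +
		"host: Stopped\n" +
		"kubelet: Stopped\n" +
		"apiserver: Stopped\n" +
		"kubeconfig: Stopped\n"

	const wantStopped = 2 // assumed: one control plane plus one worker
	if got := strings.Count(status, "host: Stopped"); got != wantStopped {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantStopped)
	}
	if got := strings.Count(status, "kubelet: Stopped"); got != wantStopped {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantStopped)
	}
}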

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-483000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-483000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180956125s)

-- stdout --
	* [multinode-483000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	* Restarting existing qemu2 VM for "multinode-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 17:57:37.771170    3735 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:57:37.771292    3735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:57:37.771296    3735 out.go:304] Setting ErrFile to fd 2...
	I0803 17:57:37.771298    3735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:57:37.771412    3735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:57:37.772373    3735 out.go:298] Setting JSON to false
	I0803 17:57:37.788523    3735 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3421,"bootTime":1722729636,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:57:37.788624    3735 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:57:37.793814    3735 out.go:177] * [multinode-483000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:57:37.801687    3735 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:57:37.801761    3735 notify.go:220] Checking for updates...
	I0803 17:57:37.808738    3735 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:57:37.811672    3735 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:57:37.814686    3735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:57:37.817734    3735 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:57:37.820686    3735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:57:37.823907    3735 config.go:182] Loaded profile config "multinode-483000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:57:37.824171    3735 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:57:37.827652    3735 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 17:57:37.834691    3735 start.go:297] selected driver: qemu2
	I0803 17:57:37.834697    3735 start.go:901] validating driver "qemu2" against &{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:57:37.834758    3735 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:57:37.837083    3735 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 17:57:37.837104    3735 cni.go:84] Creating CNI manager for ""
	I0803 17:57:37.837109    3735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 17:57:37.837150    3735 start.go:340] cluster config:
	{Name:multinode-483000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:57:37.840745    3735 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:57:37.846658    3735 out.go:177] * Starting "multinode-483000" primary control-plane node in "multinode-483000" cluster
	I0803 17:57:37.850731    3735 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:57:37.850748    3735 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:57:37.850760    3735 cache.go:56] Caching tarball of preloaded images
	I0803 17:57:37.850820    3735 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 17:57:37.850825    3735 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:57:37.850899    3735 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/multinode-483000/config.json ...
	I0803 17:57:37.851318    3735 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:57:37.851347    3735 start.go:364] duration metric: took 22.834µs to acquireMachinesLock for "multinode-483000"
	I0803 17:57:37.851355    3735 start.go:96] Skipping create...Using existing machine configuration
	I0803 17:57:37.851361    3735 fix.go:54] fixHost starting: 
	I0803 17:57:37.851474    3735 fix.go:112] recreateIfNeeded on multinode-483000: state=Stopped err=<nil>
	W0803 17:57:37.851484    3735 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 17:57:37.855663    3735 out.go:177] * Restarting existing qemu2 VM for "multinode-483000" ...
	I0803 17:57:37.863550    3735 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:57:37.863585    3735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:75:91:eb:21:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2
	I0803 17:57:37.865467    3735 main.go:141] libmachine: STDOUT: 
	I0803 17:57:37.865486    3735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:57:37.865513    3735 fix.go:56] duration metric: took 14.153083ms for fixHost
	I0803 17:57:37.865518    3735 start.go:83] releasing machines lock for "multinode-483000", held for 14.167792ms
	W0803 17:57:37.865526    3735 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:57:37.865563    3735 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:57:37.865568    3735 start.go:729] Will try again in 5 seconds ...
	I0803 17:57:42.867585    3735 start.go:360] acquireMachinesLock for multinode-483000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:57:42.867986    3735 start.go:364] duration metric: took 321.959µs to acquireMachinesLock for "multinode-483000"
	I0803 17:57:42.868104    3735 start.go:96] Skipping create...Using existing machine configuration
	I0803 17:57:42.868125    3735 fix.go:54] fixHost starting: 
	I0803 17:57:42.868795    3735 fix.go:112] recreateIfNeeded on multinode-483000: state=Stopped err=<nil>
	W0803 17:57:42.868822    3735 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 17:57:42.873271    3735 out.go:177] * Restarting existing qemu2 VM for "multinode-483000" ...
	I0803 17:57:42.881096    3735 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:57:42.881290    3735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:75:91:eb:21:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/multinode-483000/disk.qcow2
	I0803 17:57:42.890167    3735 main.go:141] libmachine: STDOUT: 
	I0803 17:57:42.890597    3735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:57:42.890708    3735 fix.go:56] duration metric: took 22.582583ms for fixHost
	I0803 17:57:42.890729    3735 start.go:83] releasing machines lock for "multinode-483000", held for 22.72075ms
	W0803 17:57:42.890858    3735 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-483000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:57:42.897179    3735 out.go:177] 
	W0803 17:57:42.901135    3735 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:57:42.901159    3735 out.go:239] * 
	* 
	W0803 17:57:42.903711    3735 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:57:42.912032    3735 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-483000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (68.153083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
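Every failure in this run reduces to the same step: libmachine launches QEMU through socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet, and that connection is refused, so no VM ever boots. A minimal Go sketch of the reachability check implied by that error (the socket path comes from the log above; everything else is illustrative):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Dial the same unix socket the qemu2 driver hands to
        // socket_vmnet_client. "connection refused" here reproduces the
        // failure mode seen throughout this report.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this check fails on the build host, the socket_vmnet daemon is most likely not running; minikube's qemu2 driver documentation suggests starting it on a Homebrew install with `sudo brew services start socket_vmnet`.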

TestMultiNode/serial/ValidateNameConflict (20.25s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-483000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-483000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-483000-m01 --driver=qemu2 : exit status 80 (10.044580709s)

-- stdout --
	* [multinode-483000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-483000-m01" primary control-plane node in "multinode-483000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-483000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-483000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-483000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-483000-m02 --driver=qemu2 : exit status 80 (9.978765917s)

-- stdout --
	* [multinode-483000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-483000-m02" primary control-plane node in "multinode-483000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-483000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-483000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-483000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-483000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-483000: exit status 83 (78.991792ms)

-- stdout --
	* The control-plane node multinode-483000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-483000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-483000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-483000 -n multinode-483000: exit status 7 (29.760083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.25s)
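The "executing:" lines above show how the qemu2 driver wires networking: socket_vmnet_client connects to /var/run/socket_vmnet and then execs qemu-system-aarch64 with the connected socket handed down as file descriptor 3 (hence `-netdev socket,id=net0,fd=3`). The failing step can be reproduced in isolation; a sketch assuming the Homebrew paths shown in the log, with a no-op command standing in for QEMU:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Re-run the step libmachine logs as "executing:", substituting the
        // no-op /usr/bin/true for qemu-system-aarch64. Paths are taken from
        // the report; adjust for a different socket_vmnet install.
        cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
            "/var/run/socket_vmnet", "/usr/bin/true")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            // On this host the client fails before exec with
            // `Failed to connect to "/var/run/socket_vmnet": Connection refused`
            // and exit status 1, matching every stderr block above.
            log.Fatal(err)
        }
    }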

TestPreload (10.05s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-360000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-360000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.909287042s)

-- stdout --
	* [test-preload-360000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-360000" primary control-plane node in "test-preload-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 17:58:03.373777    3790 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:58:03.373900    3790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:58:03.373903    3790 out.go:304] Setting ErrFile to fd 2...
	I0803 17:58:03.373905    3790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:58:03.374033    3790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:58:03.375037    3790 out.go:298] Setting JSON to false
	I0803 17:58:03.390999    3790 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3447,"bootTime":1722729636,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:58:03.391058    3790 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:58:03.396448    3790 out.go:177] * [test-preload-360000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:58:03.404512    3790 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:58:03.404581    3790 notify.go:220] Checking for updates...
	I0803 17:58:03.411445    3790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:58:03.414461    3790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:58:03.417368    3790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:58:03.420423    3790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:58:03.423457    3790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:58:03.426797    3790 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:58:03.426860    3790 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:58:03.431435    3790 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 17:58:03.437404    3790 start.go:297] selected driver: qemu2
	I0803 17:58:03.437409    3790 start.go:901] validating driver "qemu2" against <nil>
	I0803 17:58:03.437415    3790 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:58:03.439680    3790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 17:58:03.442456    3790 out.go:177] * Automatically selected the socket_vmnet network
	I0803 17:58:03.445526    3790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 17:58:03.445570    3790 cni.go:84] Creating CNI manager for ""
	I0803 17:58:03.445577    3790 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 17:58:03.445582    3790 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 17:58:03.445622    3790 start.go:340] cluster config:
	{Name:test-preload-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:58:03.449404    3790 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:03.456435    3790 out.go:177] * Starting "test-preload-360000" primary control-plane node in "test-preload-360000" cluster
	I0803 17:58:03.460421    3790 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0803 17:58:03.460485    3790 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/test-preload-360000/config.json ...
	I0803 17:58:03.460499    3790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/test-preload-360000/config.json: {Name:mkaada95adb023c28b5219999f19a471761b2af0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:58:03.460496    3790 cache.go:107] acquiring lock: {Name:mk454d502bb00fe9f5578b8ccf966bf1c1c667d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:03.460495    3790 cache.go:107] acquiring lock: {Name:mk7ee06c1c4c453edea334426fe6b259faa5bde2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:03.460533    3790 cache.go:107] acquiring lock: {Name:mk475beddf2c78c84876ce3fa3508478ddb893ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:03.460650    3790 cache.go:107] acquiring lock: {Name:mk84f3af2eddc7d07ba4e8a7d4bc453721217957 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:03.460706    3790 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0803 17:58:03.460714    3790 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0803 17:58:03.460735    3790 cache.go:107] acquiring lock: {Name:mkf50aeda6cb567924f35f07335ea929a0b88bd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:03.460780    3790 start.go:360] acquireMachinesLock for test-preload-360000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:58:03.460799    3790 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0803 17:58:03.460818    3790 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "test-preload-360000"
	I0803 17:58:03.460850    3790 cache.go:107] acquiring lock: {Name:mk025caa8ac1bce9f00ae8e84036815b29a1ba97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:03.460831    3790 start.go:93] Provisioning new machine with config: &{Name:test-preload-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:58:03.460872    3790 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:58:03.460822    3790 cache.go:107] acquiring lock: {Name:mk9ece2459f33c0671819059efb8bdb76ef19e35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:03.460906    3790 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 17:58:03.460879    3790 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0803 17:58:03.460920    3790 cache.go:107] acquiring lock: {Name:mk5007f118515dcf9fcf08cec06d100c9d81521e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:58:03.461306    3790 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 17:58:03.465492    3790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 17:58:03.466177    3790 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0803 17:58:03.466169    3790 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0803 17:58:03.472343    3790 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 17:58:03.473093    3790 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0803 17:58:03.473109    3790 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0803 17:58:03.473174    3790 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0803 17:58:03.475112    3790 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0803 17:58:03.475142    3790 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0803 17:58:03.475172    3790 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 17:58:03.475198    3790 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0803 17:58:03.483681    3790 start.go:159] libmachine.API.Create for "test-preload-360000" (driver="qemu2")
	I0803 17:58:03.483701    3790 client.go:168] LocalClient.Create starting
	I0803 17:58:03.483783    3790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:58:03.483814    3790 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:03.483824    3790 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:03.483867    3790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:58:03.483892    3790 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:03.483903    3790 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:03.484258    3790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:58:03.642386    3790 main.go:141] libmachine: Creating SSH key...
	I0803 17:58:03.763839    3790 main.go:141] libmachine: Creating Disk image...
	I0803 17:58:03.763861    3790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:58:03.764083    3790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2
	I0803 17:58:03.773640    3790 main.go:141] libmachine: STDOUT: 
	I0803 17:58:03.773662    3790 main.go:141] libmachine: STDERR: 
	I0803 17:58:03.773707    3790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2 +20000M
	I0803 17:58:03.783027    3790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:58:03.783044    3790 main.go:141] libmachine: STDERR: 
	I0803 17:58:03.783061    3790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2
	I0803 17:58:03.783065    3790 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:58:03.783074    3790 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:58:03.783099    3790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:b5:1f:d4:70:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2
	I0803 17:58:03.784838    3790 main.go:141] libmachine: STDOUT: 
	I0803 17:58:03.784858    3790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:58:03.784875    3790 client.go:171] duration metric: took 301.179208ms to LocalClient.Create
	I0803 17:58:04.016114    3790 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0803 17:58:04.022972    3790 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0803 17:58:04.024174    3790 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0803 17:58:04.025457    3790 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0803 17:58:04.030972    3790 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0803 17:58:04.058067    3790 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0803 17:58:04.058102    3790 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0803 17:58:04.086595    3790 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0803 17:58:04.160577    3790 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0803 17:58:04.160624    3790 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 699.999542ms
	I0803 17:58:04.160655    3790 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0803 17:58:04.257312    3790 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0803 17:58:04.257394    3790 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0803 17:58:04.494480    3790 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0803 17:58:04.494547    3790 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.034078292s
	I0803 17:58:04.494571    3790 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0803 17:58:05.785050    3790 start.go:128] duration metric: took 2.324229125s to createHost
	I0803 17:58:05.785097    3790 start.go:83] releasing machines lock for "test-preload-360000", held for 2.324340583s
	W0803 17:58:05.785162    3790 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:05.797076    3790 out.go:177] * Deleting "test-preload-360000" in qemu2 ...
	W0803 17:58:05.830418    3790 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:05.830449    3790 start.go:729] Will try again in 5 seconds ...
	I0803 17:58:06.309137    3790 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0803 17:58:06.309212    3790 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.848560958s
	I0803 17:58:06.309244    3790 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0803 17:58:06.322908    3790 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0803 17:58:06.322946    3790 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.862211125s
	I0803 17:58:06.322967    3790 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0803 17:58:06.828633    3790 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0803 17:58:06.828674    3790 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.368260292s
	I0803 17:58:06.828700    3790 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0803 17:58:07.370708    3790 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0803 17:58:07.370757    3790 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 3.910383208s
	I0803 17:58:07.370788    3790 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0803 17:58:09.683309    3790 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0803 17:58:09.683353    3790 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.222676916s
	I0803 17:58:09.683377    3790 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0803 17:58:10.832446    3790 start.go:360] acquireMachinesLock for test-preload-360000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 17:58:10.832837    3790 start.go:364] duration metric: took 310.958µs to acquireMachinesLock for "test-preload-360000"
	I0803 17:58:10.832957    3790 start.go:93] Provisioning new machine with config: &{Name:test-preload-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 17:58:10.833217    3790 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 17:58:10.842863    3790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 17:58:10.894158    3790 start.go:159] libmachine.API.Create for "test-preload-360000" (driver="qemu2")
	I0803 17:58:10.894211    3790 client.go:168] LocalClient.Create starting
	I0803 17:58:10.894325    3790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 17:58:10.894391    3790 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:10.894435    3790 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:10.894513    3790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 17:58:10.894558    3790 main.go:141] libmachine: Decoding PEM data...
	I0803 17:58:10.894579    3790 main.go:141] libmachine: Parsing certificate...
	I0803 17:58:10.895074    3790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 17:58:11.062647    3790 main.go:141] libmachine: Creating SSH key...
	I0803 17:58:11.183382    3790 main.go:141] libmachine: Creating Disk image...
	I0803 17:58:11.183388    3790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 17:58:11.183583    3790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2
	I0803 17:58:11.193114    3790 main.go:141] libmachine: STDOUT: 
	I0803 17:58:11.193153    3790 main.go:141] libmachine: STDERR: 
	I0803 17:58:11.193225    3790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2 +20000M
	I0803 17:58:11.201487    3790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 17:58:11.201502    3790 main.go:141] libmachine: STDERR: 
	I0803 17:58:11.201514    3790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2
	I0803 17:58:11.201520    3790 main.go:141] libmachine: Starting QEMU VM...
	I0803 17:58:11.201529    3790 qemu.go:418] Using hvf for hardware acceleration
	I0803 17:58:11.201563    3790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:d1:72:fb:66:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/test-preload-360000/disk.qcow2
	I0803 17:58:11.203351    3790 main.go:141] libmachine: STDOUT: 
	I0803 17:58:11.203374    3790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 17:58:11.203387    3790 client.go:171] duration metric: took 309.182417ms to LocalClient.Create
	I0803 17:58:11.601556    3790 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0803 17:58:11.601628    3790 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.141080541s
	I0803 17:58:11.601654    3790 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0803 17:58:11.601701    3790 cache.go:87] Successfully saved all images to host disk.
	I0803 17:58:13.205623    3790 start.go:128] duration metric: took 2.372417833s to createHost
	I0803 17:58:13.205707    3790 start.go:83] releasing machines lock for "test-preload-360000", held for 2.372917125s
	W0803 17:58:13.206105    3790 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 17:58:13.220690    3790 out.go:177] 
	W0803 17:58:13.224678    3790 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 17:58:13.224703    3790 out.go:239] * 
	* 
	W0803 17:58:13.227315    3790 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:58:13.240573    3790 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-360000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-03 17:58:13.258519 -0700 PDT m=+2298.497221126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-360000 -n test-preload-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-360000 -n test-preload-360000: exit status 7 (65.497125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-360000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-360000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-360000
--- FAIL: TestPreload (10.05s)
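One detail worth noting in the TestPreload stderr: image caching runs concurrently with host creation and completes even though both VM creation attempts fail, so every "save to tar file ... succeeded" line leaves a tarball under the MINIKUBE_HOME shown above. A sketch that lists those cached image tarballs; the fallback path is the jenkins MINIKUBE_HOME from this run and should be treated as an assumption:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Walk $MINIKUBE_HOME/cache/images/arm64 and print each cached image
        // tarball, e.g. .../registry.k8s.io/kube-proxy_v1.24.4.
        home := os.Getenv("MINIKUBE_HOME")
        if home == "" {
            home = "/Users/jenkins/minikube-integration/19364-1166/.minikube"
        }
        root := filepath.Join(home, "cache", "images", "arm64")
        err := filepath.WalkDir(root, func(path string, d os.DirEntry, err error) error {
            if err != nil {
                return err
            }
            if !d.IsDir() {
                fmt.Println(path)
            }
            return nil
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, "walk failed:", err)
        }
    }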

TestScheduledStopUnix (10.44s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-176000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-176000 --memory=2048 --driver=qemu2 : exit status 80 (10.291107208s)

-- stdout --
	* [scheduled-stop-176000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-176000" primary control-plane node in "scheduled-stop-176000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-176000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-176000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-176000" primary control-plane node in "scheduled-stop-176000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-176000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-03 17:58:23.690641 -0700 PDT m=+2308.929670334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-176000 -n scheduled-stop-176000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-176000 -n scheduled-stop-176000: exit status 7 (70.747208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-176000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-176000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-176000
--- FAIL: TestScheduledStopUnix (10.44s)

TestSkaffold (12.21s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3687966270 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3687966270 version: (1.071319s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-623000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-623000 --memory=2600 --driver=qemu2 : exit status 80 (9.886267042s)

-- stdout --
	* [skaffold-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-623000" primary control-plane node in "skaffold-623000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-623000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-623000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-623000" primary control-plane node in "skaffold-623000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-623000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-623000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-03 17:58:35.907105 -0700 PDT m=+2321.146516668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-623000 -n skaffold-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-623000 -n skaffold-623000: exit status 7 (63.701625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-623000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-623000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-623000
--- FAIL: TestSkaffold (12.21s)
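Note: every qemu2 start in this report fails the same way — the driver cannot reach the host-side socket_vmnet daemon at /var/run/socket_vmnet. A minimal diagnostic sketch, assuming socket_vmnet was installed via Homebrew as in minikube's qemu driver docs (paths and service names may differ on other setups):

	# is anything listening on the socket the driver dials?
	ls -l /var/run/socket_vmnet
	# is the launchd-managed daemon loaded?
	sudo launchctl list | grep -i socket_vmnet
	# (re)start the daemon; it must run as root to create the vmnet interface
	sudo brew services start socket_vmnet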

TestRunningBinaryUpgrade (595.56s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3041419756 start -p running-upgrade-359000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3041419756 start -p running-upgrade-359000 --memory=2200 --vm-driver=qemu2 : (52.038827292s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-359000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0803 18:01:23.562757    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 18:02:08.807026    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-359000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m28.469625583s)

-- stdout --
	* [running-upgrade-359000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-359000" primary control-plane node in "running-upgrade-359000" cluster
	* Updating the running qemu2 "running-upgrade-359000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0803 18:00:10.781450    4477 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:00:10.781590    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:00:10.781594    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:00:10.781596    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:00:10.781738    4477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:00:10.782822    4477 out.go:298] Setting JSON to false
	I0803 18:00:10.798980    4477 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3574,"bootTime":1722729636,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:00:10.799088    4477 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:00:10.804591    4477 out.go:177] * [running-upgrade-359000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:00:10.811611    4477 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:00:10.811644    4477 notify.go:220] Checking for updates...
	I0803 18:00:10.817527    4477 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:00:10.821566    4477 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:00:10.824517    4477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:00:10.827530    4477 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:00:10.830594    4477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:00:10.832159    4477 config.go:182] Loaded profile config "running-upgrade-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:00:10.835461    4477 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0803 18:00:10.838578    4477 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:00:10.842378    4477 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 18:00:10.849521    4477 start.go:297] selected driver: qemu2
	I0803 18:00:10.849527    4477 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50291 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 18:00:10.849581    4477 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:00:10.851934    4477 cni.go:84] Creating CNI manager for ""
	I0803 18:00:10.851950    4477 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:00:10.851981    4477 start.go:340] cluster config:
	{Name:running-upgrade-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50291 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 18:00:10.852030    4477 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:00:10.859529    4477 out.go:177] * Starting "running-upgrade-359000" primary control-plane node in "running-upgrade-359000" cluster
	I0803 18:00:10.863563    4477 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 18:00:10.863580    4477 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0803 18:00:10.863592    4477 cache.go:56] Caching tarball of preloaded images
	I0803 18:00:10.863649    4477 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:00:10.863657    4477 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0803 18:00:10.863718    4477 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/config.json ...
	I0803 18:00:10.864154    4477 start.go:360] acquireMachinesLock for running-upgrade-359000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:00:10.864192    4477 start.go:364] duration metric: took 31.791µs to acquireMachinesLock for "running-upgrade-359000"
	I0803 18:00:10.864200    4477 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:00:10.864205    4477 fix.go:54] fixHost starting: 
	I0803 18:00:10.864877    4477 fix.go:112] recreateIfNeeded on running-upgrade-359000: state=Running err=<nil>
	W0803 18:00:10.864885    4477 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:00:10.869537    4477 out.go:177] * Updating the running qemu2 "running-upgrade-359000" VM ...
	I0803 18:00:10.877474    4477 machine.go:94] provisionDockerMachine start ...
	I0803 18:00:10.877510    4477 main.go:141] libmachine: Using SSH client type: native
	I0803 18:00:10.877630    4477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103216a10] 0x103219270 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0803 18:00:10.877636    4477 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 18:00:10.949745    4477 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-359000
	
	I0803 18:00:10.949761    4477 buildroot.go:166] provisioning hostname "running-upgrade-359000"
	I0803 18:00:10.949811    4477 main.go:141] libmachine: Using SSH client type: native
	I0803 18:00:10.949934    4477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103216a10] 0x103219270 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0803 18:00:10.949948    4477 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-359000 && echo "running-upgrade-359000" | sudo tee /etc/hostname
	I0803 18:00:11.021044    4477 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-359000
	
	I0803 18:00:11.021097    4477 main.go:141] libmachine: Using SSH client type: native
	I0803 18:00:11.021213    4477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103216a10] 0x103219270 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0803 18:00:11.021224    4477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-359000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-359000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-359000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 18:00:11.089469    4477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 18:00:11.089485    4477 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19364-1166/.minikube CaCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19364-1166/.minikube}
	I0803 18:00:11.089493    4477 buildroot.go:174] setting up certificates
	I0803 18:00:11.089497    4477 provision.go:84] configureAuth start
	I0803 18:00:11.089501    4477 provision.go:143] copyHostCerts
	I0803 18:00:11.089570    4477 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem, removing ...
	I0803 18:00:11.089576    4477 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem
	I0803 18:00:11.089718    4477 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem (1082 bytes)
	I0803 18:00:11.089909    4477 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem, removing ...
	I0803 18:00:11.089913    4477 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem
	I0803 18:00:11.089956    4477 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem (1123 bytes)
	I0803 18:00:11.090052    4477 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem, removing ...
	I0803 18:00:11.090055    4477 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem
	I0803 18:00:11.090098    4477 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem (1675 bytes)
	I0803 18:00:11.090175    4477 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-359000 san=[127.0.0.1 localhost minikube running-upgrade-359000]
	I0803 18:00:11.185535    4477 provision.go:177] copyRemoteCerts
	I0803 18:00:11.185567    4477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 18:00:11.185573    4477 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	I0803 18:00:11.221562    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 18:00:11.228440    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 18:00:11.235287    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0803 18:00:11.242415    4477 provision.go:87] duration metric: took 152.9185ms to configureAuth
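	# note (not from the test run): a hedged sketch for double-checking the server
	# cert provisioned above; the SANs should match the san=[...] list in the log.
	# Assumes OpenSSL 1.1.1+ on the guest; older builds can use `-noout -text`:
	#   openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName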
	I0803 18:00:11.242423    4477 buildroot.go:189] setting minikube options for container-runtime
	I0803 18:00:11.242524    4477 config.go:182] Loaded profile config "running-upgrade-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:00:11.242559    4477 main.go:141] libmachine: Using SSH client type: native
	I0803 18:00:11.242644    4477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103216a10] 0x103219270 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0803 18:00:11.242652    4477 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0803 18:00:11.308605    4477 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0803 18:00:11.308614    4477 buildroot.go:70] root file system type: tmpfs
	I0803 18:00:11.308675    4477 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0803 18:00:11.308727    4477 main.go:141] libmachine: Using SSH client type: native
	I0803 18:00:11.308846    4477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103216a10] 0x103219270 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0803 18:00:11.308884    4477 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0803 18:00:11.380765    4477 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0803 18:00:11.380821    4477 main.go:141] libmachine: Using SSH client type: native
	I0803 18:00:11.380940    4477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103216a10] 0x103219270 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0803 18:00:11.380947    4477 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0803 18:00:11.450882    4477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 18:00:11.450891    4477 machine.go:97] duration metric: took 573.429333ms to provisionDockerMachine
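	# note (not from the test run): the bare "ExecStart=" in the unit written above
	# is deliberate — systemd rejects a second ExecStart= for non-oneshot services,
	# so the inherited command must be cleared before the new one is set. A hedged
	# sketch for verifying the merged result on the guest:
	#   sudo systemctl cat docker.service          # base unit plus drop-ins
	#   sudo systemd-analyze verify docker.service # flags duplicate ExecStart= etc.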
	I0803 18:00:11.450895    4477 start.go:293] postStartSetup for "running-upgrade-359000" (driver="qemu2")
	I0803 18:00:11.450902    4477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 18:00:11.450947    4477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 18:00:11.450956    4477 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	I0803 18:00:11.486771    4477 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 18:00:11.489362    4477 info.go:137] Remote host: Buildroot 2021.02.12
	I0803 18:00:11.489372    4477 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1166/.minikube/addons for local assets ...
	I0803 18:00:11.489467    4477 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1166/.minikube/files for local assets ...
	I0803 18:00:11.489560    4477 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem -> 16732.pem in /etc/ssl/certs
	I0803 18:00:11.489665    4477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 18:00:11.492199    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem --> /etc/ssl/certs/16732.pem (1708 bytes)
	I0803 18:00:11.498864    4477 start.go:296] duration metric: took 47.964875ms for postStartSetup
	I0803 18:00:11.498878    4477 fix.go:56] duration metric: took 634.69375ms for fixHost
	I0803 18:00:11.498916    4477 main.go:141] libmachine: Using SSH client type: native
	I0803 18:00:11.499023    4477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103216a10] 0x103219270 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0803 18:00:11.499027    4477 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 18:00:11.565974    4477 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722733211.443579972
	
	I0803 18:00:11.565982    4477 fix.go:216] guest clock: 1722733211.443579972
	I0803 18:00:11.565986    4477 fix.go:229] Guest: 2024-08-03 18:00:11.443579972 -0700 PDT Remote: 2024-08-03 18:00:11.498879 -0700 PDT m=+0.736475084 (delta=-55.299028ms)
	I0803 18:00:11.565999    4477 fix.go:200] guest clock delta is within tolerance: -55.299028ms
	I0803 18:00:11.566002    4477 start.go:83] releasing machines lock for "running-upgrade-359000", held for 701.827ms
	I0803 18:00:11.566065    4477 ssh_runner.go:195] Run: cat /version.json
	I0803 18:00:11.566074    4477 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	I0803 18:00:11.566065    4477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 18:00:11.566099    4477 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	W0803 18:00:11.566630    4477 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50259: connect: connection refused
	I0803 18:00:11.566650    4477 retry.go:31] will retry after 313.126884ms: dial tcp [::1]:50259: connect: connection refused
	W0803 18:00:11.941457    4477 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0803 18:00:11.941588    4477 ssh_runner.go:195] Run: systemctl --version
	I0803 18:00:11.945442    4477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 18:00:11.949054    4477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 18:00:11.949123    4477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0803 18:00:11.954767    4477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0803 18:00:11.963323    4477 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
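	# note (not from the test run): the two find/sed one-liners above drop IPv6
	# dst/subnet entries and force "subnet": "10.244.0.0/16" into any bridge and
	# podman CNI configs under /etc/cni/net.d, matching the pod CIDR handed to
	# kubeadm later in this log (podSubnet: "10.244.0.0/16").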
	I0803 18:00:11.963336    4477 start.go:495] detecting cgroup driver to use...
	I0803 18:00:11.963455    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 18:00:11.971355    4477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0803 18:00:11.975485    4477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0803 18:00:11.979443    4477 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0803 18:00:11.979477    4477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0803 18:00:11.983302    4477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 18:00:11.987170    4477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0803 18:00:11.990697    4477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 18:00:11.993930    4477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 18:00:11.996978    4477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0803 18:00:11.999704    4477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0803 18:00:12.002666    4477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0803 18:00:12.005860    4477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 18:00:12.009379    4477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 18:00:12.011784    4477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:00:12.101869    4477 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0803 18:00:12.112563    4477 start.go:495] detecting cgroup driver to use...
	I0803 18:00:12.112625    4477 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0803 18:00:12.117663    4477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 18:00:12.122558    4477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 18:00:12.133395    4477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 18:00:12.137930    4477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 18:00:12.142612    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 18:00:12.148183    4477 ssh_runner.go:195] Run: which cri-dockerd
	I0803 18:00:12.149428    4477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0803 18:00:12.151880    4477 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0803 18:00:12.156785    4477 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0803 18:00:12.244910    4477 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0803 18:00:12.341567    4477 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0803 18:00:12.341624    4477 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0803 18:00:12.347074    4477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:00:12.440130    4477 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 18:00:14.577800    4477 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.137722208s)
	I0803 18:00:14.577869    4477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0803 18:00:14.582228    4477 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0803 18:00:14.588804    4477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 18:00:14.593326    4477 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0803 18:00:14.674905    4477 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0803 18:00:14.754049    4477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:00:14.831619    4477 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0803 18:00:14.838613    4477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 18:00:14.843088    4477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:00:14.909752    4477 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0803 18:00:14.948485    4477 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0803 18:00:14.948555    4477 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0803 18:00:14.950780    4477 start.go:563] Will wait 60s for crictl version
	I0803 18:00:14.950835    4477 ssh_runner.go:195] Run: which crictl
	I0803 18:00:14.952112    4477 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 18:00:14.963543    4477 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
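	# note (not from the test run): the crictl calls above resolve their endpoint
	# from /etc/crictl.yaml, written earlier to point at cri-dockerd. A hedged
	# one-liner to query the endpoint explicitly:
	#   sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version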
	I0803 18:00:14.963607    4477 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 18:00:14.976230    4477 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 18:00:15.000142    4477 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0803 18:00:15.000270    4477 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0803 18:00:15.001579    4477 kubeadm.go:883] updating cluster {Name:running-upgrade-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50291 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0803 18:00:15.001626    4477 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 18:00:15.001665    4477 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 18:00:15.011660    4477 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 18:00:15.011668    4477 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 18:00:15.011712    4477 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 18:00:15.015041    4477 ssh_runner.go:195] Run: which lz4
	I0803 18:00:15.016833    4477 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0803 18:00:15.018009    4477 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 18:00:15.018019    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0803 18:00:15.954986    4477 docker.go:649] duration metric: took 938.2195ms to copy over tarball
	I0803 18:00:15.955040    4477 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 18:00:17.140543    4477 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.185524125s)
	I0803 18:00:17.140558    4477 ssh_runner.go:146] rm: /preloaded.tar.lz4
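	# note (not from the test run): the preload step above copies a ~360 MB lz4
	# tarball of pre-pulled images to the guest and unpacks it over /var:
	#   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	# the repositories.json rewrite and docker restart that follow let the daemon
	# pick up the freshly extracted image store.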
	I0803 18:00:17.156165    4477 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 18:00:17.159152    4477 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0803 18:00:17.164534    4477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:00:17.246551    4477 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 18:00:17.462858    4477 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 18:00:17.477619    4477 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 18:00:17.477627    4477 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 18:00:17.477632    4477 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0803 18:00:17.483203    4477 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:00:17.484906    4477 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:00:17.486114    4477 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:00:17.486274    4477 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:00:17.487229    4477 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:00:17.487440    4477 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:00:17.488314    4477 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:00:17.489723    4477 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0803 18:00:17.489791    4477 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:00:17.491521    4477 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:00:17.491574    4477 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:00:17.491577    4477 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:00:17.492642    4477 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0803 18:00:17.492715    4477 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0803 18:00:17.493752    4477 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:00:17.494410    4477 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0803 18:00:17.926124    4477 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:00:17.939035    4477 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0803 18:00:17.939061    4477 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:00:17.939113    4477 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:00:17.945147    4477 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:00:17.949535    4477 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:00:17.951311    4477 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0803 18:00:17.958789    4477 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0803 18:00:17.958809    4477 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:00:17.958861    4477 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:00:17.967227    4477 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0803 18:00:17.967254    4477 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:00:17.967304    4477 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:00:17.972890    4477 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0803 18:00:17.973495    4477 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:00:17.978603    4477 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0803 18:00:17.983906    4477 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0803 18:00:17.988796    4477 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0803 18:00:17.988911    4477 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:00:17.989955    4477 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0803 18:00:17.989971    4477 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:00:17.989998    4477 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:00:17.996561    4477 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0803 18:00:17.996585    4477 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0803 18:00:17.996642    4477 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0803 18:00:18.002888    4477 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0803 18:00:18.019367    4477 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0803 18:00:18.019388    4477 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:00:18.019397    4477 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0803 18:00:18.019408    4477 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0803 18:00:18.019442    4477 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0803 18:00:18.019443    4477 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:00:18.019369    4477 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0803 18:00:18.019443    4477 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0803 18:00:18.019524    4477 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0803 18:00:18.021385    4477 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0803 18:00:18.021404    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0803 18:00:18.064009    4477 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0803 18:00:18.064025    4477 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0803 18:00:18.064137    4477 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0803 18:00:18.064137    4477 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0803 18:00:18.081482    4477 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0803 18:00:18.081513    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0803 18:00:18.081482    4477 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0803 18:00:18.081532    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0803 18:00:18.113874    4477 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0803 18:00:18.113895    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0803 18:00:18.140581    4477 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0803 18:00:18.140697    4477 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:00:18.188058    4477 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0803 18:00:18.192889    4477 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0803 18:00:18.192904    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0803 18:00:18.199209    4477 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0803 18:00:18.199237    4477 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:00:18.199294    4477 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:00:18.282575    4477 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0803 18:00:18.359170    4477 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0803 18:00:18.359183    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0803 18:00:19.667339    4477 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.468052416s)
	I0803 18:00:19.667378    4477 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0803 18:00:19.667339    4477 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load": (1.308175459s)
	I0803 18:00:19.667425    4477 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0803 18:00:19.667878    4477 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0803 18:00:19.673055    4477 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0803 18:00:19.673125    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0803 18:00:19.733305    4477 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0803 18:00:19.733318    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0803 18:00:19.970573    4477 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0803 18:00:19.970620    4477 cache_images.go:92] duration metric: took 2.49305775s to LoadCachedImages
	W0803 18:00:19.970664    4477 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
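	# note (not from the test run): the earlier "arch mismatch: want arm64 got
	# amd64. fixing" warnings can be reproduced by hand, since docker records the
	# platform per image:
	#   docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/coredns/coredns:v1.8.6
	# minikube then reloads a correct-arch copy from its local cache, which is what
	# the Loading/Transferred lines above show.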
	I0803 18:00:19.970673    4477 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0803 18:00:19.970741    4477 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-359000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 18:00:19.970830    4477 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0803 18:00:19.984330    4477 cni.go:84] Creating CNI manager for ""
	I0803 18:00:19.984342    4477 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:00:19.984349    4477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 18:00:19.984360    4477 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-359000 NodeName:running-upgrade-359000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 18:00:19.984433    4477 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-359000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 18:00:19.984486    4477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0803 18:00:19.987308    4477 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 18:00:19.987335    4477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 18:00:19.990283    4477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0803 18:00:19.995260    4477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 18:00:20.000319    4477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0803 18:00:20.005526    4477 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0803 18:00:20.007021    4477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:00:20.088167    4477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 18:00:20.093039    4477 certs.go:68] Setting up /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000 for IP: 10.0.2.15
	I0803 18:00:20.093048    4477 certs.go:194] generating shared ca certs ...
	I0803 18:00:20.093056    4477 certs.go:226] acquiring lock for ca certs: {Name:mk4c6ee72dd2b768bec67e582e0b6b1af1b504e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:00:20.093212    4477 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.key
	I0803 18:00:20.093246    4477 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.key
	I0803 18:00:20.093250    4477 certs.go:256] generating profile certs ...
	I0803 18:00:20.093312    4477 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/client.key
	I0803 18:00:20.093327    4477 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.key.45ebc57e
	I0803 18:00:20.093341    4477 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.crt.45ebc57e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0803 18:00:20.211148    4477 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.crt.45ebc57e ...
	I0803 18:00:20.211153    4477 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.crt.45ebc57e: {Name:mk69fdfa57092bfd2a5056fd9a54a6790256ac4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:00:20.211578    4477 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.key.45ebc57e ...
	I0803 18:00:20.211583    4477 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.key.45ebc57e: {Name:mkbc2e9eb1fe1397da711cc4ba26873dc0b6c6cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:00:20.211703    4477 certs.go:381] copying /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.crt.45ebc57e -> /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.crt
	I0803 18:00:20.211882    4477 certs.go:385] copying /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.key.45ebc57e -> /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.key
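
	The apiserver.crt.45ebc57e generated above is a serving certificate signed by the shared minikube CA, with SANs covering the service VIP, loopback, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15). A sketch of that kind of issuance with the Go standard library; this is not minikube's actual crypto.go, just the same idea:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "math/big"
	        "net"
	        "time"
	    )

	    // signServingCert sketches the apiserver cert generation: a certificate
	    // signed by the shared CA whose SANs are the service VIP, loopback,
	    // an alternate VIP, and the node IP, as listed in the log.
	    func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            return nil, nil, err
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{CommonName: "minikube"},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            IPAddresses: []net.IP{
	                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
	                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
	            },
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	        if err != nil {
	            return nil, nil, err
	        }
	        return der, key, nil
	    }
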
	I0803 18:00:20.212006    4477 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/proxy-client.key
	I0803 18:00:20.212129    4477 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673.pem (1338 bytes)
	W0803 18:00:20.212162    4477 certs.go:480] ignoring /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673_empty.pem, impossibly tiny 0 bytes
	I0803 18:00:20.212168    4477 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 18:00:20.212189    4477 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem (1082 bytes)
	I0803 18:00:20.212207    4477 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem (1123 bytes)
	I0803 18:00:20.212224    4477 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem (1675 bytes)
	I0803 18:00:20.212261    4477 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem (1708 bytes)
	I0803 18:00:20.212587    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 18:00:20.221921    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0803 18:00:20.229318    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 18:00:20.236728    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 18:00:20.244395    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0803 18:00:20.250943    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 18:00:20.258267    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 18:00:20.265680    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 18:00:20.273237    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673.pem --> /usr/share/ca-certificates/1673.pem (1338 bytes)
	I0803 18:00:20.279765    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem --> /usr/share/ca-certificates/16732.pem (1708 bytes)
	I0803 18:00:20.286744    4477 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 18:00:20.293830    4477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 18:00:20.298701    4477 ssh_runner.go:195] Run: openssl version
	I0803 18:00:20.300564    4477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1673.pem && ln -fs /usr/share/ca-certificates/1673.pem /etc/ssl/certs/1673.pem"
	I0803 18:00:20.303486    4477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1673.pem
	I0803 18:00:20.305055    4477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 00:28 /usr/share/ca-certificates/1673.pem
	I0803 18:00:20.305079    4477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1673.pem
	I0803 18:00:20.306869    4477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1673.pem /etc/ssl/certs/51391683.0"
	I0803 18:00:20.310155    4477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16732.pem && ln -fs /usr/share/ca-certificates/16732.pem /etc/ssl/certs/16732.pem"
	I0803 18:00:20.313246    4477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16732.pem
	I0803 18:00:20.314568    4477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 00:28 /usr/share/ca-certificates/16732.pem
	I0803 18:00:20.314589    4477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16732.pem
	I0803 18:00:20.316370    4477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16732.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 18:00:20.319071    4477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 18:00:20.322606    4477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 18:00:20.324228    4477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:21 /usr/share/ca-certificates/minikubeCA.pem
	I0803 18:00:20.324245    4477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 18:00:20.325861    4477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
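
	The openssl/ln pairs above install each CA into the system trust directory: OpenSSL resolves CAs in /etc/ssl/certs by <subject-hash>.0 (hence 51391683.0, 3ec20f2e.0, b5213941.0), so each PEM is hashed and then symlinked under that name. A sketch of the same step, assuming the process can write to /etc/ssl/certs:

	    package main

	    import (
	        "os"
	        "os/exec"
	        "strings"
	    )

	    // installCACert mirrors the hash-and-symlink sequence in the log: compute
	    // the OpenSSL subject hash of the PEM, then link it as <hash>.0 so the
	    // default cert-directory lookup can find it.
	    func installCACert(pemPath string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	        if err != nil {
	            return err
	        }
	        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	        _ = os.Remove(link) // emulate `ln -fs`: replace any stale link
	        return os.Symlink(pemPath, link)
	    }
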
	I0803 18:00:20.328562    4477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 18:00:20.330018    4477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 18:00:20.331845    4477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 18:00:20.333649    4477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 18:00:20.335626    4477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 18:00:20.337620    4477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 18:00:20.339328    4477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
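
	Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within 24 hours, which is what would trigger regeneration. The same check in pure Go, as a sketch:

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    // expiresWithin reports whether the certificate at path expires within d,
	    // mirroring `openssl x509 -checkend 86400` (d = 24h in the log).
	    func expiresWithin(path string, d time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM data in %s", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(d).After(cert.NotAfter), nil
	    }
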
	I0803 18:00:20.341201    4477 kubeadm.go:392] StartCluster: {Name:running-upgrade-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50291 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 18:00:20.341268    4477 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 18:00:20.352774    4477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 18:00:20.355877    4477 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0803 18:00:20.355883    4477 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0803 18:00:20.355906    4477 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0803 18:00:20.358824    4477 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0803 18:00:20.359057    4477 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-359000" does not appear in /Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:00:20.359107    4477 kubeconfig.go:62] /Users/jenkins/minikube-integration/19364-1166/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-359000" cluster setting kubeconfig missing "running-upgrade-359000" context setting]
	I0803 18:00:20.359259    4477 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/kubeconfig: {Name:mk0a3c55e1982b2d92db1034b47f8334d27942c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:00:20.360416    4477 kapi.go:59] client config for running-upgrade-359000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1045ac1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
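
	The rest.Config dumped above is a mutual-TLS client aimed directly at the node's apiserver endpoint. Built by hand with client-go it would look roughly like this (paths shortened here; the full profile paths appear in the log):

	    package main

	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/rest"
	    )

	    // newClient sketches the kapi client config above: the profile's client
	    // cert/key pair, verified against the shared minikube CA.
	    func newClient() (*kubernetes.Clientset, error) {
	        cfg := &rest.Config{
	            Host: "https://10.0.2.15:8443",
	            TLSClientConfig: rest.TLSClientConfig{
	                CertFile: ".minikube/profiles/running-upgrade-359000/client.crt",
	                KeyFile:  ".minikube/profiles/running-upgrade-359000/client.key",
	                CAFile:   ".minikube/ca.crt",
	            },
	        }
	        return kubernetes.NewForConfig(cfg)
	    }
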
	I0803 18:00:20.360743    4477 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0803 18:00:20.363661    4477 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-359000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
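
	The drift check is just `diff -u` over the old and new kubeadm.yaml: exit status 0 means identical, 1 means the files differ (here the CRI socket scheme and cgroup driver changed), anything higher means diff itself failed. A sketch:

	    package main

	    import (
	        "errors"
	        "os/exec"
	    )

	    // configDrifted mirrors the check above: run `diff -u old new` and treat
	    // exit status 1 as "drifted", returning the unified diff for logging.
	    func configDrifted(oldPath, newPath string) (bool, string, error) {
	        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Output()
	        if err == nil {
	            return false, "", nil // identical
	        }
	        var exitErr *exec.ExitError
	        if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
	            return true, string(out), nil // files differ; out holds the diff
	        }
	        return false, "", err // exit status >= 2: diff failed outright
	    }
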
	I0803 18:00:20.363668    4477 kubeadm.go:1160] stopping kube-system containers ...
	I0803 18:00:20.363708    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 18:00:20.375188    4477 docker.go:483] Stopping containers: [8997a8c92101 89045c91c75d 187e98f05c13 845caf4ca2c8 4b4d796858e2 eb369f588964 df62fe6ae6da 0b82634b0f3a 65796270a024 15880780270c 8d75cb628f64 b07961993ded]
	I0803 18:00:20.375254    4477 ssh_runner.go:195] Run: docker stop 8997a8c92101 89045c91c75d 187e98f05c13 845caf4ca2c8 4b4d796858e2 eb369f588964 df62fe6ae6da 0b82634b0f3a 65796270a024 15880780270c 8d75cb628f64 b07961993ded
	I0803 18:00:20.386116    4477 ssh_runner.go:195] Run: sudo systemctl stop kubelet
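
	Stopping the control plane for reconfiguration is a two-step affair: list every container whose kubelet-assigned name matches k8s_*_(kube-system)_, stop them all in one invocation, then stop the kubelet so it cannot restart them. Roughly:

	    package main

	    import (
	        "os/exec"
	        "strings"
	    )

	    // stopKubeSystem sketches the container-stop step above. Names follow
	    // kubelet's k8s_<container>_<pod>_<namespace>_... convention, so the
	    // filter catches exactly the kube-system pods.
	    func stopKubeSystem() error {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	        if err != nil {
	            return err
	        }
	        ids := strings.Fields(string(out))
	        if len(ids) == 0 {
	            return nil
	        }
	        if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
	            return err
	        }
	        return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
	    }
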
	I0803 18:00:20.481486    4477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 18:00:20.486039    4477 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug  4 00:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug  4 00:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug  4 01:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Aug  4 00:59 /etc/kubernetes/scheduler.conf
	
	I0803 18:00:20.486069    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf
	I0803 18:00:20.490029    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0803 18:00:20.490052    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 18:00:20.494060    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf
	I0803 18:00:20.497378    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0803 18:00:20.497406    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 18:00:20.500371    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf
	I0803 18:00:20.503240    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0803 18:00:20.503266    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 18:00:20.506525    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf
	I0803 18:00:20.509637    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0803 18:00:20.509664    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 18:00:20.512393    4477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 18:00:20.515307    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:00:20.536307    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:00:21.419560    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:00:21.624368    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:00:21.653024    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
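
	Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly written kubeadm.yaml, with the versioned binaries directory prepended to PATH, exactly as the commands above show. As a sketch:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    // restartControlPlane sketches the phase sequence above: each init phase
	    // is run in order against the existing config, via `sudo env PATH=...`.
	    func restartControlPlane() error {
	        phases := [][]string{
	            {"init", "phase", "certs", "all"},
	            {"init", "phase", "kubeconfig", "all"},
	            {"init", "phase", "kubelet-start"},
	            {"init", "phase", "control-plane", "all"},
	            {"init", "phase", "etcd", "local"},
	        }
	        for _, p := range phases {
	            args := append([]string{"env",
	                "PATH=/var/lib/minikube/binaries/v1.24.1:" + os.Getenv("PATH"), "kubeadm"}, p...)
	            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
	                return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
	            }
	        }
	        return nil
	    }
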
	I0803 18:00:21.677668    4477 api_server.go:52] waiting for apiserver process to appear ...
	I0803 18:00:21.677750    4477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:00:22.179957    4477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:00:22.679792    4477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:00:22.686842    4477 api_server.go:72] duration metric: took 1.009207s to wait for apiserver process to appear ...
	I0803 18:00:22.686852    4477 api_server.go:88] waiting for apiserver healthz status ...
	I0803 18:00:22.686861    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:00:27.688843    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:00:27.688875    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:00:32.689032    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:00:32.689075    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:00:37.689865    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:00:37.689914    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:00:42.690560    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:00:42.690602    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:00:47.691410    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:00:47.691514    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:00:52.692982    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:00:52.693064    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:00:57.694984    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:00:57.695065    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:01:02.697563    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:01:02.697670    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:01:07.700220    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:01:07.700299    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:01:12.702826    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:01:12.702876    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:01:17.704632    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:01:17.704719    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:01:22.709463    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
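
	The health wait above is a straight poll of GET /healthz with a 5-second per-request timeout; every failed attempt is logged as "stopped" and retried until an overall deadline passes. Something like the following sketch, where TLS verification is skipped only for brevity (the real check verifies against the cluster CA):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls the apiserver's /healthz endpoint until it
	    // answers 200 OK or the overall deadline passes.
	    func waitForHealthz(url string, deadline time.Duration) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for stop := time.Now().Add(deadline); time.Now().Before(stop); {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver never became healthy at %s", url)
	    }
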
	I0803 18:01:22.709783    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:01:22.748513    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:01:22.748640    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:01:22.769669    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:01:22.769766    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:01:22.784967    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:01:22.785030    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:01:22.798133    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:01:22.798209    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:01:22.809056    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:01:22.809130    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:01:22.819886    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:01:22.819960    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:01:22.830270    4477 logs.go:276] 0 containers: []
	W0803 18:01:22.830280    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:01:22.830333    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:01:22.840896    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:01:22.840912    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:01:22.840918    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:01:22.915311    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:01:22.915323    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:01:22.930221    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:01:22.930232    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:01:22.942159    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:01:22.942168    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:01:22.946530    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:01:22.946537    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:01:22.962054    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:01:22.962067    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:01:22.976788    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:01:22.976798    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:01:22.994264    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:01:22.994273    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:01:23.006064    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:01:23.006075    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:01:23.018829    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:01:23.018841    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:01:23.054577    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:01:23.054670    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:01:23.055269    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:01:23.055274    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:01:23.076250    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:01:23.076263    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:01:23.090106    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:01:23.090120    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:01:23.102377    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:01:23.102390    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:01:23.127439    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:01:23.127447    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:01:23.141148    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:01:23.141160    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:01:23.153125    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:01:23.153136    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:01:23.164496    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:01:23.164506    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:01:23.164534    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:01:23.164540    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:01:23.164550    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:01:23.164555    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:01:23.164557    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
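
	When the apiserver stays unreachable, each retry cycle also gathers diagnostics: it resolves container IDs per component via `docker ps -a --filter=name=k8s_<component>` and tails the last 400 log lines of each, alongside the kubelet and Docker journals. The per-container part, sketched:

	    package main

	    import (
	        "os/exec"
	        "strings"
	    )

	    // gatherLogs resolves container IDs for a component via the kubelet
	    // naming filter, then tails the last 400 log lines of each. Exited
	    // containers are included (-a), which is why two IDs per component
	    // show up in the report.
	    func gatherLogs(component string) (map[string]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        logs := make(map[string]string)
	        for _, id := range strings.Fields(string(out)) {
	            tail, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	            logs[id] = string(tail)
	        }
	        return logs, nil
	    }
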
	I0803 18:01:33.172462    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:01:38.176375    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:01:38.176815    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:01:38.216308    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:01:38.216449    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:01:38.241095    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:01:38.241210    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:01:38.256202    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:01:38.256287    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:01:38.268684    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:01:38.268753    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:01:38.279260    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:01:38.279323    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:01:38.290076    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:01:38.290146    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:01:38.300604    4477 logs.go:276] 0 containers: []
	W0803 18:01:38.300616    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:01:38.300676    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:01:38.311098    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:01:38.311116    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:01:38.311124    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:01:38.347299    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:01:38.347392    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:01:38.347956    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:01:38.347961    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:01:38.383350    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:01:38.383364    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:01:38.395730    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:01:38.395739    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:01:38.408291    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:01:38.408303    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:01:38.429005    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:01:38.429018    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:01:38.443410    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:01:38.443421    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:01:38.457216    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:01:38.457226    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:01:38.483586    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:01:38.483596    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:01:38.502192    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:01:38.502206    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:01:38.507010    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:01:38.507016    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:01:38.521199    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:01:38.521212    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:01:38.536661    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:01:38.536672    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:01:38.548985    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:01:38.548998    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:01:38.566047    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:01:38.566059    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:01:38.578375    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:01:38.578388    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:01:38.593277    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:01:38.593287    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:01:38.605000    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:01:38.605012    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:01:38.605040    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:01:38.605045    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:01:38.605053    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:01:38.605058    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:01:38.605061    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:01:48.609734    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:01:53.612613    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:01:53.612926    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:01:53.643073    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:01:53.643187    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:01:53.660061    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:01:53.660141    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:01:53.673416    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:01:53.673488    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:01:53.684940    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:01:53.685000    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:01:53.695506    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:01:53.695576    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:01:53.706171    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:01:53.706245    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:01:53.716205    4477 logs.go:276] 0 containers: []
	W0803 18:01:53.716215    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:01:53.716265    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:01:53.731139    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:01:53.731155    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:01:53.731161    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:01:53.735457    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:01:53.735466    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:01:53.748489    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:01:53.748502    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:01:53.766530    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:01:53.766541    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:01:53.790249    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:01:53.790259    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:01:53.804415    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:01:53.804427    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:01:53.819636    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:01:53.819645    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:01:53.845286    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:01:53.845298    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:01:53.881779    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:01:53.881878    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:01:53.882480    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:01:53.882484    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:01:53.899671    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:01:53.899683    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:01:53.913645    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:01:53.913657    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:01:53.924647    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:01:53.924660    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:01:53.938925    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:01:53.938935    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:01:53.950877    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:01:53.950889    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:01:53.962097    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:01:53.962107    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:01:53.973898    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:01:53.973907    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:01:54.010362    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:01:54.010373    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:01:54.024200    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:01:54.024212    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:01:54.024236    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:01:54.024240    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:01:54.024244    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:01:54.024274    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:01:54.024281    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:02:04.028833    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:02:09.031721    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:02:09.032093    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:02:09.064645    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:02:09.064768    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:02:09.084611    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:02:09.084694    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:02:09.098549    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:02:09.098622    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:02:09.110747    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:02:09.110818    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:02:09.121809    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:02:09.121875    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:02:09.135842    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:02:09.135907    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:02:09.146051    4477 logs.go:276] 0 containers: []
	W0803 18:02:09.146067    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:02:09.146122    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:02:09.156928    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:02:09.156946    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:02:09.156951    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:02:09.176370    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:02:09.176382    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:02:09.189762    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:02:09.189772    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:02:09.201747    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:02:09.201761    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:02:09.206739    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:02:09.206748    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:02:09.217940    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:02:09.217952    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:02:09.236097    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:02:09.236106    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:02:09.262387    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:02:09.262397    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:02:09.279283    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:02:09.279295    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:02:09.290707    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:02:09.290718    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:02:09.302790    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:02:09.302802    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:02:09.317075    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:02:09.317086    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:02:09.351387    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:02:09.351398    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:02:09.362800    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:02:09.362811    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:02:09.377995    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:02:09.378005    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:02:09.389739    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:02:09.389750    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:02:09.401616    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:02:09.401627    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:02:09.438137    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:02:09.438229    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:02:09.438793    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:02:09.438798    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:02:09.438822    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:02:09.438826    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:02:09.438841    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:02:09.438844    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:02:09.438847    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
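	The probe cadence in this log is fixed: each healthz attempt is abandoned after about five seconds with a client timeout, component logs are gathered while the failure persists, and roughly ten seconds later the probe repeats. A rough shell equivalent of a single probe, assuming curl is available in the guest (the harness itself issues the request in Go via api_server.go):

	    # Assumed illustration: one probe with a 5-second budget against the
	    # apiserver healthz endpoint, skipping verification of its self-signed cert.
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz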
	I0803 18:02:19.443053    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:02:24.444236    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:02:24.444678    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:02:24.482581    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:02:24.482721    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:02:24.505181    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:02:24.505297    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:02:24.521260    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:02:24.521335    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:02:24.534294    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:02:24.534367    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:02:24.545151    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:02:24.545220    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:02:24.556174    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:02:24.556244    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:02:24.566744    4477 logs.go:276] 0 containers: []
	W0803 18:02:24.566756    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:02:24.566813    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:02:24.577628    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:02:24.577646    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:02:24.577652    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:02:24.589079    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:02:24.589089    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:02:24.603174    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:02:24.603187    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:02:24.618043    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:02:24.618055    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:02:24.629917    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:02:24.629927    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:02:24.648436    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:02:24.648455    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:02:24.675325    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:02:24.675346    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:02:24.680635    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:02:24.680647    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:02:24.695137    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:02:24.695155    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:02:24.709029    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:02:24.709040    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:02:24.749882    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:02:24.749898    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:02:24.771990    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:02:24.772007    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:02:24.788282    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:02:24.788298    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:02:24.805271    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:02:24.805282    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:02:24.817323    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:02:24.817332    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:02:24.852611    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:02:24.852703    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:02:24.853284    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:02:24.853289    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:02:24.867205    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:02:24.867218    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:02:24.879570    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:02:24.879581    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:02:24.879608    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:02:24.879614    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:02:24.879619    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:02:24.879622    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:02:24.879625    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
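	Each gathering pass first resolves container IDs per control-plane component with a docker name filter, then tails every matching container. The two steps the harness runs separately could be sketched as one hypothetical loop:

	    # Assumed illustration: enumerate a component's containers (running or
	    # exited), then tail the last 400 log lines of each, as logs.go does per ID.
	    for id in $(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'); do
	      docker logs --tail 400 "$id"
	    done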
	I0803 18:02:34.883663    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:02:39.886359    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:02:39.886760    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:02:39.926733    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:02:39.926870    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:02:39.947911    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:02:39.948019    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:02:39.962894    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:02:39.962961    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:02:39.977398    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:02:39.977464    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:02:39.988076    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:02:39.988134    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:02:40.001940    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:02:40.002003    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:02:40.013363    4477 logs.go:276] 0 containers: []
	W0803 18:02:40.013374    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:02:40.013432    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:02:40.028807    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:02:40.028821    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:02:40.028826    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:02:40.063678    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:02:40.063692    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:02:40.077628    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:02:40.077640    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:02:40.090247    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:02:40.090260    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:02:40.110653    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:02:40.110665    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:02:40.126555    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:02:40.126568    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:02:40.138067    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:02:40.138079    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:02:40.153511    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:02:40.153522    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:02:40.164625    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:02:40.164636    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:02:40.176232    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:02:40.176243    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:02:40.190705    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:02:40.190717    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:02:40.205367    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:02:40.205379    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:02:40.228726    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:02:40.228734    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:02:40.262808    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:02:40.262899    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:02:40.263495    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:02:40.263499    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:02:40.267829    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:02:40.267838    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:02:40.288275    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:02:40.288285    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:02:40.302143    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:02:40.302153    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:02:40.313956    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:02:40.313968    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:02:40.313993    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:02:40.313999    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:02:40.314003    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:02:40.314006    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:02:40.314009    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
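	The "Found kubelet problem" entries come from scanning the most recent 400 kubelet journal lines for known failure patterns. A hypothetical manual version of that scan:

	    # Assumed illustration: read the same journal window the harness uses and
	    # surface the reflector list/watch failures it flags.
	    sudo journalctl -u kubelet -n 400 --no-pager | grep -E 'reflector.go:(138|324)'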
	I0803 18:02:50.317943    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:02:55.320569    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:02:55.321048    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:02:55.363879    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:02:55.364018    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:02:55.386840    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:02:55.386937    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:02:55.400820    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:02:55.400899    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:02:55.412942    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:02:55.413012    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:02:55.423723    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:02:55.423791    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:02:55.434633    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:02:55.434704    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:02:55.444946    4477 logs.go:276] 0 containers: []
	W0803 18:02:55.444956    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:02:55.445012    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:02:55.456296    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:02:55.456315    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:02:55.456320    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:02:55.473322    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:02:55.473333    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:02:55.484742    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:02:55.484753    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:02:55.496570    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:02:55.496579    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:02:55.521412    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:02:55.521422    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:02:55.525733    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:02:55.525742    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:02:55.537099    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:02:55.537111    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:02:55.550045    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:02:55.550057    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:02:55.565656    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:02:55.565665    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:02:55.585480    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:02:55.585492    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:02:55.597776    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:02:55.597786    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:02:55.613186    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:02:55.613199    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:02:55.648789    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:02:55.648882    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:02:55.649454    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:02:55.649460    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:02:55.669315    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:02:55.669326    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:02:55.683189    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:02:55.683200    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:02:55.717011    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:02:55.717022    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:02:55.731160    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:02:55.731169    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:02:55.748606    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:02:55.748615    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:02:55.748643    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:02:55.748648    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:02:55.748651    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:02:55.748655    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:02:55.748658    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:03:05.752526    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:03:10.754666    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:03:10.754791    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:03:10.766290    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:03:10.766365    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:03:10.778426    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:03:10.778496    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:03:10.789803    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:03:10.789870    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:03:10.800722    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:03:10.800791    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:03:10.811488    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:03:10.811563    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:03:10.822906    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:03:10.822974    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:03:10.840719    4477 logs.go:276] 0 containers: []
	W0803 18:03:10.840730    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:03:10.840787    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:03:10.860193    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:03:10.860211    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:03:10.860216    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:03:10.864828    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:03:10.864836    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:03:10.877169    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:03:10.877181    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:03:10.888788    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:03:10.888799    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:03:10.903044    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:03:10.903057    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:03:10.926414    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:03:10.926434    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:03:10.944065    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:03:10.944077    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:03:10.957022    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:03:10.957036    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:03:10.973454    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:03:10.973473    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:03:11.013548    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:03:11.013560    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:03:11.026440    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:03:11.026451    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:03:11.039611    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:03:11.039626    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:03:11.078030    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:11.078128    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:11.078733    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:03:11.078743    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:03:11.093785    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:03:11.093795    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:03:11.106358    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:03:11.106369    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:03:11.127991    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:03:11.128003    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:03:11.140450    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:03:11.140461    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:03:11.165307    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:11.165320    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:03:11.165350    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:03:11.165357    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:11.165361    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:11.165366    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:11.165369    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:03:21.169256    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:03:26.171482    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:03:26.171617    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:03:26.187105    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:03:26.187187    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:03:26.199171    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:03:26.199241    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:03:26.210097    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:03:26.210167    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:03:26.220892    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:03:26.220965    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:03:26.231045    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:03:26.231115    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:03:26.241637    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:03:26.241705    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:03:26.251721    4477 logs.go:276] 0 containers: []
	W0803 18:03:26.251732    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:03:26.251787    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:03:26.262049    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:03:26.262067    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:03:26.262073    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:03:26.276517    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:03:26.276525    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:03:26.293925    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:03:26.293936    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:03:26.318451    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:03:26.318460    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:03:26.333011    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:03:26.333024    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:03:26.344988    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:03:26.345002    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:03:26.356243    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:03:26.356256    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:03:26.360719    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:03:26.360729    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:03:26.395260    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:03:26.395276    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:03:26.410059    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:03:26.410070    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:03:26.434259    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:03:26.434272    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:03:26.472170    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:26.472265    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:26.472861    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:03:26.472866    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:03:26.486040    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:03:26.486053    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:03:26.497546    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:03:26.497557    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:03:26.511278    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:03:26.511289    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:03:26.525726    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:03:26.525738    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:03:26.537835    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:03:26.537846    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:03:26.550422    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:26.550433    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:03:26.550459    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:03:26.550464    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:26.550467    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:26.550472    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:26.550508    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:03:36.553825    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:03:41.555944    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:03:41.556060    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:03:41.567642    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:03:41.567723    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:03:41.578635    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:03:41.578704    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:03:41.590400    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:03:41.590473    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:03:41.601975    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:03:41.602050    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:03:41.613144    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:03:41.613210    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:03:41.624277    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:03:41.624343    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:03:41.634179    4477 logs.go:276] 0 containers: []
	W0803 18:03:41.634189    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:03:41.634240    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:03:41.644848    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:03:41.644867    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:03:41.644873    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:03:41.657280    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:03:41.657293    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:03:41.673767    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:03:41.673779    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:03:41.699026    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:03:41.699033    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:03:41.703236    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:03:41.703242    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:03:41.738427    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:03:41.738439    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:03:41.750464    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:03:41.750478    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:03:41.765038    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:03:41.765049    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:03:41.788034    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:03:41.788044    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:03:41.799603    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:03:41.799617    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:03:41.811531    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:03:41.811543    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:03:41.826230    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:03:41.826243    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:03:41.844076    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:03:41.844089    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:03:41.880549    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:41.880641    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:41.881238    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:03:41.881243    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:03:41.896218    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:03:41.896233    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:03:41.911642    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:03:41.911654    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:03:41.929600    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:03:41.929614    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:03:41.950427    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:41.950437    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:03:41.950466    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:03:41.950469    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:41.950473    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:41.950476    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:41.950479    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:03:51.950997    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:03:56.953035    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
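
The healthz probes above all follow the same pattern: GET https://<node-ip>:8443/healthz with a ~5s client timeout, treat anything other than a 200 "ok" response as not-ready, and retry roughly every 10 seconds. A minimal Go sketch of that loop (a hypothetical helper, not minikube's actual api_server.go code; TLS verification is skipped because the apiserver presents a self-signed certificate for the node IP):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it answers
    // "ok" or the overall deadline passes. Hypothetical helper for illustration.
    func waitForHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" above
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
            },
        }
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(10 * time.Second) // the log shows ~10s between probe rounds
        }
        return fmt.Errorf("apiserver at %s never became healthy", url)
    }

    func main() {
        if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
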
	I0803 18:03:56.953156    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:03:56.964654    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:03:56.964717    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:03:56.975524    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:03:56.975583    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:03:56.986143    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:03:56.986210    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:03:56.999164    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:03:56.999234    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:03:57.010137    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:03:57.010203    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:03:57.020783    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:03:57.020845    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:03:57.031173    4477 logs.go:276] 0 containers: []
	W0803 18:03:57.031189    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:03:57.031248    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:03:57.041878    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
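
Each of the "N containers:" lines above is produced by listing all Docker containers, running or exited, whose names carry the k8s_<component> prefix that cri-dockerd assigns to pod containers. A rough equivalent of that lookup (helper name and component list are illustrative, not minikube's logs.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (running or exited) whose
    // name matches the kubelet naming convention k8s_<component>_... .
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
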
	I0803 18:03:57.041893    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:03:57.041899    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:03:57.053337    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:03:57.053347    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:03:57.065826    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:03:57.065837    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:03:57.070472    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:03:57.070481    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:03:57.091624    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:03:57.091634    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:03:57.113728    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:03:57.113740    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:03:57.129335    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:03:57.129348    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:03:57.148130    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:03:57.148139    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:03:57.161245    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:03:57.161254    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:03:57.198810    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:57.198908    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:57.199508    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:03:57.199514    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:03:57.238260    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:03:57.238272    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:03:57.258075    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:03:57.258083    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:03:57.274510    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:03:57.274519    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:03:57.301024    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:03:57.301047    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:03:57.313970    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:03:57.313986    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:03:57.326443    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:03:57.326456    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:03:57.338341    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:03:57.338352    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:03:57.350734    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:57.350746    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:03:57.350775    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:03:57.350781    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:57.350785    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:57.350789    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:57.350792    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
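
The recurring "Found kubelet problem" warnings come from scanning the last 400 kubelet journal lines for known-bad substrings; the same two RBAC complaints resurface every round because each pass re-reads the same journal window. A hedged sketch of such a scanner (the pattern list is illustrative, not minikube's actual logs.go rule set):

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    // problemPatterns is an illustrative subset of substrings that would mark
    // a kubelet journal line as a problem worth surfacing.
    var problemPatterns = []string{
        "Failed to watch",
        "failed to list",
    }

    func kubeletProblems() ([]string, error) {
        out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
        if err != nil {
            return nil, err
        }
        var problems []string
        sc := bufio.NewScanner(strings.NewReader(string(out)))
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            line := sc.Text()
            for _, p := range problemPatterns {
                if strings.Contains(line, p) {
                    problems = append(problems, line)
                    break
                }
            }
        }
        return problems, sc.Err()
    }

    func main() {
        problems, err := kubeletProblems()
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, p := range problems {
            fmt.Println("Found kubelet problem:", p)
        }
    }
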
	I0803 18:04:07.354630    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:12.356725    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:12.356870    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:04:12.370835    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:04:12.370919    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:04:12.382119    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:04:12.382191    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:04:12.392394    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:04:12.392463    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:04:12.407931    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:04:12.408004    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:04:12.418801    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:04:12.418874    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:04:12.429943    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:04:12.430017    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:04:12.441729    4477 logs.go:276] 0 containers: []
	W0803 18:04:12.441740    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:04:12.441796    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:04:12.457614    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:04:12.457630    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:04:12.457637    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:04:12.461964    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:04:12.461972    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:04:12.496670    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:04:12.496680    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:04:12.512300    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:04:12.512312    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:04:12.524550    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:04:12.524560    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:04:12.559238    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:04:12.559330    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:04:12.559926    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:04:12.559930    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:04:12.571056    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:04:12.571068    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:04:12.582804    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:04:12.582816    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:04:12.600055    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:04:12.600066    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:04:12.612261    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:04:12.612272    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:04:12.624293    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:04:12.624304    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:04:12.638708    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:04:12.638719    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:04:12.665785    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:04:12.665796    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:04:12.680137    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:04:12.680148    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:04:12.702475    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:04:12.702481    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:04:12.718910    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:04:12.718921    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:04:12.730539    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:04:12.730549    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:04:12.742662    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:04:12.742672    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:04:12.742701    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:04:12.742706    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:04:12.742710    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:04:12.742715    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:04:12.742717    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:04:22.746577    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:27.748788    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:27.748850    4477 kubeadm.go:597] duration metric: took 4m7.387518375s to restartPrimaryControlPlane
	W0803 18:04:27.748931    4477 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0803 18:04:27.748956    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0803 18:04:28.746050    4477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 18:04:28.750863    4477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 18:04:28.753643    4477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 18:04:28.756339    4477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 18:04:28.756346    4477 kubeadm.go:157] found existing configuration files:
	
	I0803 18:04:28.756369    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf
	I0803 18:04:28.759319    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 18:04:28.759346    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 18:04:28.762043    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf
	I0803 18:04:28.764618    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 18:04:28.764641    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 18:04:28.767455    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf
	I0803 18:04:28.770018    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 18:04:28.770040    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 18:04:28.772577    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf
	I0803 18:04:28.775535    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 18:04:28.775556    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
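
The four grep/rm pairs above implement a simple staleness check before kubeadm init: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint, it is deleted so kubeadm can regenerate it. The loop below sketches that logic in Go (endpoint and file list taken from the log; error handling simplified):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50291"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the pattern (or the file) is missing;
            // either way the config cannot be trusted and is removed.
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f) // mirrors `sudo rm -f`: a missing file is fine
            }
        }
    }
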
	I0803 18:04:28.778106    4477 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 18:04:28.795271    4477 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0803 18:04:28.795301    4477 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 18:04:28.850440    4477 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 18:04:28.850499    4477 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 18:04:28.850548    4477 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0803 18:04:28.899476    4477 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 18:04:28.903632    4477 out.go:204]   - Generating certificates and keys ...
	I0803 18:04:28.903663    4477 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 18:04:28.903700    4477 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 18:04:28.903744    4477 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0803 18:04:28.903777    4477 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0803 18:04:28.903815    4477 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0803 18:04:28.903842    4477 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0803 18:04:28.903874    4477 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0803 18:04:28.903902    4477 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0803 18:04:28.903939    4477 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0803 18:04:28.903986    4477 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0803 18:04:28.904005    4477 kubeadm.go:310] [certs] Using the existing "sa" key
	I0803 18:04:28.904035    4477 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 18:04:29.041141    4477 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 18:04:29.151241    4477 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 18:04:29.226313    4477 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 18:04:29.269217    4477 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 18:04:29.298616    4477 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 18:04:29.298960    4477 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 18:04:29.299076    4477 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 18:04:29.387750    4477 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 18:04:29.390546    4477 out.go:204]   - Booting up control plane ...
	I0803 18:04:29.390593    4477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 18:04:29.390633    4477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 18:04:29.390666    4477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 18:04:29.390715    4477 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 18:04:29.390804    4477 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0803 18:04:33.390097    4477 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.003797 seconds
	I0803 18:04:33.390176    4477 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 18:04:33.394324    4477 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 18:04:33.907400    4477 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 18:04:33.907628    4477 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-359000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 18:04:34.411289    4477 kubeadm.go:310] [bootstrap-token] Using token: hwjfj5.fjkrvrvpyv2v02j4
	I0803 18:04:34.417538    4477 out.go:204]   - Configuring RBAC rules ...
	I0803 18:04:34.417609    4477 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 18:04:34.417651    4477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 18:04:34.419272    4477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 18:04:34.421318    4477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 18:04:34.422134    4477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 18:04:34.423265    4477 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 18:04:34.427665    4477 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 18:04:34.595100    4477 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 18:04:34.815345    4477 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 18:04:34.815859    4477 kubeadm.go:310] 
	I0803 18:04:34.815892    4477 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 18:04:34.815895    4477 kubeadm.go:310] 
	I0803 18:04:34.815930    4477 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 18:04:34.815933    4477 kubeadm.go:310] 
	I0803 18:04:34.815945    4477 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 18:04:34.815978    4477 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 18:04:34.816002    4477 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 18:04:34.816022    4477 kubeadm.go:310] 
	I0803 18:04:34.816053    4477 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 18:04:34.816056    4477 kubeadm.go:310] 
	I0803 18:04:34.816081    4477 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 18:04:34.816085    4477 kubeadm.go:310] 
	I0803 18:04:34.816124    4477 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 18:04:34.816166    4477 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 18:04:34.816210    4477 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 18:04:34.816213    4477 kubeadm.go:310] 
	I0803 18:04:34.816263    4477 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 18:04:34.816312    4477 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 18:04:34.816318    4477 kubeadm.go:310] 
	I0803 18:04:34.816363    4477 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hwjfj5.fjkrvrvpyv2v02j4 \
	I0803 18:04:34.816420    4477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8926886cd496fcdb8fb5b92a5ce19b9a5533dd397e42f479b7664c72b739cada \
	I0803 18:04:34.816434    4477 kubeadm.go:310] 	--control-plane 
	I0803 18:04:34.816438    4477 kubeadm.go:310] 
	I0803 18:04:34.816481    4477 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 18:04:34.816485    4477 kubeadm.go:310] 
	I0803 18:04:34.816527    4477 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hwjfj5.fjkrvrvpyv2v02j4 \
	I0803 18:04:34.816587    4477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8926886cd496fcdb8fb5b92a5ce19b9a5533dd397e42f479b7664c72b739cada 
	I0803 18:04:34.816661    4477 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 18:04:34.816668    4477 cni.go:84] Creating CNI manager for ""
	I0803 18:04:34.816675    4477 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:04:34.820381    4477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 18:04:34.827425    4477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 18:04:34.830313    4477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
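
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is a standard bridge CNI configuration. The payload below is a plausible reconstruction of such a conflist, not the exact bytes minikube ships, written out the same way the scp step does:

    package main

    import "os"

    // conflist is a representative bridge CNI config in the spirit of the one
    // minikube writes; field values here are assumptions, not the shipped file.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
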
	I0803 18:04:34.834970    4477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 18:04:34.835010    4477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 18:04:34.835028    4477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-359000 minikube.k8s.io/updated_at=2024_08_03T18_04_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=running-upgrade-359000 minikube.k8s.io/primary=true
	I0803 18:04:34.876009    4477 kubeadm.go:1113] duration metric: took 41.030833ms to wait for elevateKubeSystemPrivileges
	I0803 18:04:34.876019    4477 ops.go:34] apiserver oom_adj: -16
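
The "apiserver oom_adj: -16" line confirms the kubelet started the apiserver with a strongly negative OOM adjustment, so the kernel kills it last under memory pressure. A small sketch of that check (mirroring the pgrep and cat commands in the log above; helper structure is mine):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Same pgrep invocation as the log: newest process whose full command
        // line matches the apiserver started by minikube.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver process not found:", err)
            return
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println(err)
            return
        }
        // Expect a negative value such as -16.
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }
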
	I0803 18:04:34.876101    4477 kubeadm.go:394] duration metric: took 4m14.529663667s to StartCluster
	I0803 18:04:34.876112    4477 settings.go:142] acquiring lock: {Name:mkc455f89a0a1d96857baea22a1ca4141ab02c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:04:34.876201    4477 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:04:34.876564    4477 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/kubeconfig: {Name:mk0a3c55e1982b2d92db1034b47f8334d27942c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:04:34.876784    4477 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:04:34.876840    4477 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 18:04:34.876877    4477 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-359000"
	I0803 18:04:34.876889    4477 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-359000"
	W0803 18:04:34.876892    4477 addons.go:243] addon storage-provisioner should already be in state true
	I0803 18:04:34.876931    4477 host.go:66] Checking if "running-upgrade-359000" exists ...
	I0803 18:04:34.876933    4477 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-359000"
	I0803 18:04:34.876945    4477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-359000"
	I0803 18:04:34.876904    4477 config.go:182] Loaded profile config "running-upgrade-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:04:34.877853    4477 kapi.go:59] client config for running-upgrade-359000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1045ac1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 18:04:34.877973    4477 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-359000"
	W0803 18:04:34.877980    4477 addons.go:243] addon default-storageclass should already be in state true
	I0803 18:04:34.877987    4477 host.go:66] Checking if "running-upgrade-359000" exists ...
	I0803 18:04:34.880305    4477 out.go:177] * Verifying Kubernetes components...
	I0803 18:04:34.880643    4477 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 18:04:34.884513    4477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 18:04:34.884521    4477 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	I0803 18:04:34.888162    4477 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:04:34.892316    4477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:04:34.896382    4477 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 18:04:34.896388    4477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 18:04:34.896395    4477 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	I0803 18:04:34.966930    4477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 18:04:34.972008    4477 api_server.go:52] waiting for apiserver process to appear ...
	I0803 18:04:34.972045    4477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:04:34.975761    4477 api_server.go:72] duration metric: took 98.968083ms to wait for apiserver process to appear ...
	I0803 18:04:34.975770    4477 api_server.go:88] waiting for apiserver healthz status ...
	I0803 18:04:34.975778    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:34.981590    4477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 18:04:34.998159    4477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 18:04:39.977743    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:39.977773    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:44.977937    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:44.977979    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:49.978271    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:49.978321    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:54.978769    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:54.978823    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:59.979484    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:59.979525    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:04.980351    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:04.980394    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0803 18:05:05.326006    4477 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0803 18:05:05.329255    4477 out.go:177] * Enabled addons: storage-provisioner
	I0803 18:05:05.337199    4477 addons.go:510] duration metric: took 30.461265666s for enable addons: enabled=[storage-provisioner]
	I0803 18:05:09.981462    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:09.981505    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:14.982804    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:14.982847    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:19.984575    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:19.984620    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:24.986684    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:24.986725    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:29.987285    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:29.987338    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:34.989582    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:34.989748    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:35.001391    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:05:35.001463    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:35.011474    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:05:35.011545    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:35.021635    4477 logs.go:276] 2 containers: [bcbb40889ca3 3d1437e6d6fc]
	I0803 18:05:35.021701    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:35.032158    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:05:35.032229    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:35.042526    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:05:35.042595    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:35.056461    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:05:35.056530    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:35.066666    4477 logs.go:276] 0 containers: []
	W0803 18:05:35.066679    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:35.066735    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:35.077759    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:05:35.077774    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:05:35.077779    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:05:35.089468    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:05:35.089478    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:05:35.106871    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:05:35.106884    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:05:35.118967    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:35.118977    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:05:35.138176    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:05:35.138269    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:05:35.154360    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:35.154368    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:35.188716    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:05:35.188730    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:05:35.203061    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:05:35.203072    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:05:35.214237    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:05:35.214250    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:05:35.237825    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:35.237839    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:35.242237    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:05:35.242244    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:05:35.256883    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:05:35.256894    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:05:35.268415    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:35.268427    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:35.293638    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:05:35.293645    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:35.304805    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:05:35.304817    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:05:35.304845    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:05:35.304849    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:05:35.304861    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:05:35.304866    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:05:35.304868    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:05:45.308403    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:50.309775    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:50.309959    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:50.321754    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:05:50.321834    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:50.332359    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:05:50.332422    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:50.342609    4477 logs.go:276] 2 containers: [bcbb40889ca3 3d1437e6d6fc]
	I0803 18:05:50.342667    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:50.352809    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:05:50.352869    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:50.363710    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:05:50.363770    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:50.374460    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:05:50.374537    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:50.384303    4477 logs.go:276] 0 containers: []
	W0803 18:05:50.384315    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:50.384373    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:50.394712    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:05:50.394730    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:50.394737    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:05:50.414385    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:05:50.414477    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:05:50.430211    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:50.430218    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:50.464933    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:05:50.464945    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:05:50.477002    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:05:50.477013    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:05:50.499417    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:05:50.499429    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:50.511797    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:05:50.511807    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:05:50.529285    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:05:50.529294    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:05:50.540889    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:50.540897    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:50.566154    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:50.566163    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:50.570781    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:05:50.570790    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:05:50.585445    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:05:50.585456    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:05:50.599506    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:05:50.599520    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:05:50.611746    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:05:50.611758    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
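
Each "Gathering logs for <component> [<id>]" step above tails the last 400 lines of the matching container. A minimal local sketch of that capture, with exec.Command standing in (by assumption) for the bash-over-SSH invocation ('/bin/bash -c "docker logs --tail 400 <id>"') shown in the log:

    // Sketch of the per-container capture: tail the last 400 lines of each
    // discovered container. CombinedOutput is used because docker replays
    // the container's stdout and stderr on separate streams; the IDs are
    // the coredns containers from the discovery step above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(id string) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        if err != nil {
            fmt.Println("gather", id, "failed:", err)
            return
        }
        fmt.Printf("==> %s <==\n%s", id, out)
    }

    func main() {
        for _, id := range []string{"bcbb40889ca3", "3d1437e6d6fc"} {
            gather(id)
        }
    }
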
	I0803 18:05:50.624234    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:05:50.624248    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:05:50.624273    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:05:50.624277    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:05:50.624280    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:05:50.624285    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:05:50.624288    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:06:00.627831    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:05.630101    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
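
This is the retry cadence that repeats for the rest of the transcript: a healthz probe with a roughly 5-second client timeout (the gap between "Checking" and "stopped"), a fresh attempt about every 10 seconds, and a full log-gathering pass after each failure. A minimal Go sketch of such a probe, assuming the in-VM endpoint and intervals inferred from these timestamps (not minikube's exact code):

    // Sketch of the healthz polling loop seen in this transcript. The URL,
    // 5s timeout, and 10s interval are assumptions read off the log times.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // "stopped" appears ~5s after "Checking"
            Transport: &http.Transport{
                // the apiserver inside the VM serves a self-signed cert
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for attempt := 0; attempt < 3; attempt++ {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // e.g. context deadline exceeded
            } else {
                fmt.Println("healthz:", resp.Status)
                resp.Body.Close()
            }
            time.Sleep(10 * time.Second) // next "Checking apiserver healthz" line
        }
    }
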
	I0803 18:06:05.630529    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:05.666736    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:06:05.666875    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:05.688075    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:06:05.688161    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:05.703102    4477 logs.go:276] 2 containers: [bcbb40889ca3 3d1437e6d6fc]
	I0803 18:06:05.703182    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:05.715212    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:06:05.715291    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:05.726041    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:06:05.726111    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:05.736942    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:06:05.737012    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:05.747548    4477 logs.go:276] 0 containers: []
	W0803 18:06:05.747557    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:05.747620    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:05.762483    4477 logs.go:276] 1 containers: [3318cf46c892]
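
The discovery pass above issues one "docker ps -a" query per component, filtered on the k8s_<name> prefix that the Docker shim gives Kubernetes-managed containers, and parses the printed IDs. A minimal local sketch of that loop; running docker via os/exec is an assumption standing in for minikube's ssh_runner:

    // Sketch of the per-component container discovery. The component list
    // mirrors the queries in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            ids := strings.Fields(string(out)) // one ID per line
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
        }
    }
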
	I0803 18:06:05.762498    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:05.762504    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:05.767422    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:06:05.767429    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:06:05.781644    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:06:05.781657    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:06:05.793295    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:06:05.793306    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:05.805185    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:06:05.805199    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:06:05.817117    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:05.817127    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:06:05.836888    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:05.836985    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:05.852607    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:05.852614    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:05.887335    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:06:05.887348    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:06:05.901816    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:06:05.901831    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:06:05.913937    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:06:05.913951    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:06:05.929194    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:06:05.929204    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:06:05.940735    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:06:05.940744    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:06:05.958921    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:05.958930    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:05.982730    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:05.982742    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:06:05.982767    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:06:05.982771    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:05.982775    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:05.982781    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:05.982784    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:06:15.986303    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:20.988823    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:20.989057    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:21.014090    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:06:21.014207    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:21.031843    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:06:21.031922    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:21.044597    4477 logs.go:276] 2 containers: [bcbb40889ca3 3d1437e6d6fc]
	I0803 18:06:21.044672    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:21.055736    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:06:21.055804    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:21.066314    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:06:21.066382    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:21.076508    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:06:21.076570    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:21.086654    4477 logs.go:276] 0 containers: []
	W0803 18:06:21.086673    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:21.086731    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:21.096804    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:06:21.096821    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:06:21.096826    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:06:21.111236    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:06:21.111246    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:06:21.125271    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:06:21.125282    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:06:21.136748    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:06:21.136760    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:06:21.153138    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:06:21.153155    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:06:21.166437    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:06:21.166450    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:06:21.192954    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:21.192965    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:06:21.211506    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:21.211605    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:21.227425    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:21.227431    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:21.262436    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:06:21.262447    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:06:21.274883    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:06:21.274893    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:06:21.287260    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:21.287272    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:21.312181    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:06:21.312191    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:21.323317    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:21.323328    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:21.327597    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:21.327606    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:06:21.327630    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:06:21.327634    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:21.327638    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:21.327642    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:21.327645    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:06:31.331552    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:36.333705    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:36.333817    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:36.345236    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:06:36.345299    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:36.355983    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:06:36.356057    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:36.366719    4477 logs.go:276] 2 containers: [bcbb40889ca3 3d1437e6d6fc]
	I0803 18:06:36.366792    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:36.377147    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:06:36.377220    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:36.387215    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:06:36.387278    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:36.397881    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:06:36.397948    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:36.407766    4477 logs.go:276] 0 containers: []
	W0803 18:06:36.407780    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:36.407835    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:36.418507    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:06:36.418527    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:36.418532    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:06:36.436954    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:36.437048    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:36.452853    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:36.452862    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:36.491706    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:06:36.491717    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:06:36.503239    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:06:36.503252    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:06:36.522289    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:06:36.522302    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:06:36.539745    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:36.539756    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:36.564557    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:36.564567    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:36.568869    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:06:36.568879    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:06:36.583416    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:06:36.583425    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:06:36.597508    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:06:36.597519    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:06:36.612953    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:06:36.612969    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:06:36.624544    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:06:36.624556    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:06:36.636423    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:06:36.636442    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:36.647906    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:36.647917    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:06:36.647946    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:06:36.647950    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:36.647954    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:36.647957    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:36.647961    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:06:46.651919    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:51.652162    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:51.652338    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:51.666559    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:06:51.666644    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:51.678520    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:06:51.678594    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:51.689499    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:06:51.689577    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:51.699992    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:06:51.700061    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:51.716118    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:06:51.716189    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:51.726870    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:06:51.726944    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:51.737554    4477 logs.go:276] 0 containers: []
	W0803 18:06:51.737565    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:51.737622    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:51.748494    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:06:51.748509    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:51.748514    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:51.783595    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:06:51.783610    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:06:51.794802    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:06:51.794816    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:06:51.821594    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:51.821606    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:06:51.839063    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:51.839154    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:51.854720    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:06:51.854725    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:06:51.868020    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:06:51.868029    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:06:51.880030    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:06:51.880041    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:51.891678    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:51.891690    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:51.896032    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:06:51.896039    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:06:51.910582    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:06:51.910593    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:06:51.922154    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:06:51.922167    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:06:51.933996    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:06:51.934006    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:06:51.948007    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:51.948018    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:51.971776    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:06:51.971785    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:06:51.985408    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:06:51.985418    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:06:52.000427    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:52.000438    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:06:52.000466    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:06:52.000470    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:52.000474    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:52.000502    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:52.000506    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:07:02.003067    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:07.005261    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:07.005425    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:07.023725    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:07:07.023810    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:07.037023    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:07:07.037097    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:07.049375    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:07:07.049442    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:07.059936    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:07:07.059996    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:07.070409    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:07:07.070480    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:07.081317    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:07:07.081380    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:07.098160    4477 logs.go:276] 0 containers: []
	W0803 18:07:07.098172    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:07.098233    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:07.108577    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:07:07.108595    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:07.108602    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:07.113521    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:07.113528    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:07.148817    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:07:07.148828    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:07:07.163312    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:07:07.163325    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:07:07.175544    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:07:07.175557    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:07:07.186949    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:07:07.186958    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:07:07.201917    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:07.201927    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:07.227975    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:07.227983    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:07:07.247190    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:07.247281    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:07.263062    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:07:07.263072    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:07:07.275473    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:07:07.275484    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:07:07.286953    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:07:07.286964    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:07:07.300763    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:07:07.300773    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:07:07.312241    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:07:07.312253    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:07:07.331858    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:07:07.331869    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:07:07.343319    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:07:07.343330    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:07.355615    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:07.355629    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:07:07.355655    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:07:07.355660    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:07.355664    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:07.355668    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:07.355671    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:07:17.358423    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:22.360593    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:22.360708    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:22.371698    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:07:22.371772    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:22.381930    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:07:22.382003    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:22.393164    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:07:22.393233    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:22.403626    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:07:22.403700    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:22.413910    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:07:22.413980    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:22.431210    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:07:22.431276    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:22.441378    4477 logs.go:276] 0 containers: []
	W0803 18:07:22.441395    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:22.441453    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:22.451787    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:07:22.451804    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:22.451810    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:22.456720    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:07:22.456726    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:07:22.470428    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:07:22.470442    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:07:22.485662    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:07:22.485678    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:07:22.504706    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:07:22.504717    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:22.517107    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:07:22.517118    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:07:22.529206    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:07:22.529221    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:07:22.542630    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:07:22.542643    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:07:22.554681    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:07:22.554691    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:07:22.566725    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:22.566735    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:22.591152    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:22.591167    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:07:22.609323    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:22.609420    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:22.625192    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:22.625198    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:22.662162    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:07:22.662175    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:07:22.681241    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:07:22.681260    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:07:22.700273    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:07:22.700287    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:07:22.715589    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:22.715603    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:07:22.715636    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:07:22.715643    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:22.715647    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:22.715651    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:22.715654    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:07:32.719471    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:37.721527    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:37.721654    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:37.737700    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:07:37.737773    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:37.754450    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:07:37.754528    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:37.770166    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:07:37.770242    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:37.786001    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:07:37.786082    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:37.796768    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:07:37.796831    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:37.807787    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:07:37.807854    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:37.818808    4477 logs.go:276] 0 containers: []
	W0803 18:07:37.818819    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:37.818880    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:37.830214    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:07:37.830233    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:07:37.830237    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:07:37.845045    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:07:37.845059    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:07:37.857813    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:07:37.857826    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:07:37.872549    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:37.872562    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:07:37.895090    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:37.895185    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:37.911589    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:07:37.911603    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:07:37.924320    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:07:37.924331    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:07:37.937597    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:37.937609    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:37.962958    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:07:37.962967    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:37.975554    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:37.975570    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:37.980209    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:37.980216    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:38.016074    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:07:38.016087    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:07:38.031750    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:07:38.031762    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:07:38.048129    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:07:38.048143    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:07:38.062988    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:07:38.063000    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:07:38.082428    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:07:38.082439    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:07:38.094773    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:38.094785    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:07:38.094813    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:07:38.094817    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:38.094844    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:38.094848    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:38.094851    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:07:48.098723    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:53.100941    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:53.101103    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:53.112192    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:07:53.112277    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:53.123503    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:07:53.123575    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:53.134104    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:07:53.134180    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:53.145742    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:07:53.145810    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:53.156029    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:07:53.156103    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:53.166398    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:07:53.166461    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:53.176485    4477 logs.go:276] 0 containers: []
	W0803 18:07:53.176497    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:53.176559    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:53.187269    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:07:53.187285    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:53.187292    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:53.210661    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:07:53.210667    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:07:53.222178    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:07:53.222190    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:07:53.238222    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:07:53.238233    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:07:53.252491    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:07:53.252501    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:53.264486    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:07:53.264497    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:07:53.276442    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:07:53.276454    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:07:53.287685    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:07:53.287697    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:07:53.302786    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:07:53.302799    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:07:53.314438    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:53.314451    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:07:53.332068    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:53.332159    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:53.347787    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:53.347794    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:53.382604    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:07:53.382614    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:07:53.394996    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:07:53.395010    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:07:53.418248    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:53.418263    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:53.423398    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:07:53.423407    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:07:53.442780    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:53.442792    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:07:53.442819    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:07:53.442825    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:53.442830    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:53.442842    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:53.442845    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:08:03.445253    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:08.445518    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:08.445587    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:08:08.456657    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:08:08.456724    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:08:08.467476    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:08:08.467543    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:08:08.478739    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:08:08.478809    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:08:08.493487    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:08:08.493561    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:08:08.505562    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:08:08.505630    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:08:08.516989    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:08:08.517060    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:08:08.527922    4477 logs.go:276] 0 containers: []
	W0803 18:08:08.527934    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:08:08.527992    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:08:08.543035    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:08:08.543068    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:08:08.543074    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:08:08.556768    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:08:08.556780    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:08:08.573932    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:08:08.574025    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:08:08.590288    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:08:08.590304    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:08:08.595099    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:08:08.595106    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:08:08.609954    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:08:08.609965    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:08:08.625453    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:08:08.625468    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:08:08.643573    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:08:08.643587    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:08:08.657032    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:08:08.657042    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:08:08.696450    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:08:08.696462    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:08:08.708850    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:08:08.708863    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:08:08.722484    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:08:08.722498    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:08:08.734924    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:08:08.734936    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:08:08.748121    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:08:08.748132    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:08:08.760684    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:08:08.760697    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:08:08.783376    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:08:08.783391    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:08:08.816644    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:08:08.816667    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:08:08.816709    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:08:08.816714    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:08:08.816718    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:08:08.816722    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:08:08.816725    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:08:18.819504    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:23.821631    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:23.821806    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:08:23.839192    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:08:23.839279    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:08:23.851930    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:08:23.852004    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:08:23.864049    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:08:23.864122    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:08:23.875384    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:08:23.875449    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:08:23.886121    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:08:23.886189    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:08:23.897260    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:08:23.897325    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:08:23.907520    4477 logs.go:276] 0 containers: []
	W0803 18:08:23.907530    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:08:23.907585    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:08:23.917863    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:08:23.917879    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:08:23.917885    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:08:23.930053    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:08:23.930066    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:08:23.935055    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:08:23.935061    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:08:23.949035    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:08:23.949045    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:08:23.965171    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:08:23.965181    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:08:23.976970    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:08:23.976983    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:08:23.989051    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:08:23.989064    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:08:24.036280    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:08:24.036296    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:08:24.060243    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:08:24.060255    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:08:24.072762    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:08:24.072774    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:08:24.090431    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:08:24.090444    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:08:24.108917    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:08:24.109008    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:08:24.124666    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:08:24.124671    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:08:24.139411    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:08:24.139426    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:08:24.151542    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:08:24.151555    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:08:24.162774    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:08:24.162787    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:08:24.186806    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:08:24.186815    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:08:24.186838    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:08:24.186842    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:08:24.186845    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:08:24.186849    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:08:24.186863    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:08:34.190724    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:39.192913    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:39.197259    4477 out.go:177] 
	W0803 18:08:39.200287    4477 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0803 18:08:39.200292    4477 out.go:239] * 
	W0803 18:08:39.200749    4477 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:08:39.212157    4477 out.go:177] 

** /stderr **
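The stderr above shows minikube's start loop probing the apiserver health endpoint (https://10.0.2.15:8443/healthz) roughly every ten seconds, re-gathering component logs after each timeout, until the 6m0s node-start deadline expires and it exits with GUEST_START. A minimal sketch of the same probe, assuming shell access inside the guest VM (the address and port are taken from the log above; the command itself is illustrative and not part of the test run):

	# Illustrative only: re-run the healthz probe minikube performs above.
	# A healthy apiserver answers "ok"; -k skips verification of the
	# cluster-internal TLS certificate.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz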
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-359000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-03 18:08:39.280846 -0700 PDT m=+2924.525271043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-359000 -n running-upgrade-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-359000 -n running-upgrade-359000: exit status 2 (15.642165917s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-359000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-711000          | force-systemd-flag-711000 | jenkins | v1.33.1 | 03 Aug 24 17:58 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-336000              | force-systemd-env-336000  | jenkins | v1.33.1 | 03 Aug 24 17:58 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-336000           | force-systemd-env-336000  | jenkins | v1.33.1 | 03 Aug 24 17:58 PDT | 03 Aug 24 17:58 PDT |
	| start   | -p docker-flags-144000                | docker-flags-144000       | jenkins | v1.33.1 | 03 Aug 24 17:58 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-711000             | force-systemd-flag-711000 | jenkins | v1.33.1 | 03 Aug 24 17:59 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-711000          | force-systemd-flag-711000 | jenkins | v1.33.1 | 03 Aug 24 17:59 PDT | 03 Aug 24 17:59 PDT |
	| start   | -p cert-expiration-170000             | cert-expiration-170000    | jenkins | v1.33.1 | 03 Aug 24 17:59 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-144000 ssh               | docker-flags-144000       | jenkins | v1.33.1 | 03 Aug 24 17:59 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-144000 ssh               | docker-flags-144000       | jenkins | v1.33.1 | 03 Aug 24 17:59 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-144000                | docker-flags-144000       | jenkins | v1.33.1 | 03 Aug 24 17:59 PDT | 03 Aug 24 17:59 PDT |
	| start   | -p cert-options-356000                | cert-options-356000       | jenkins | v1.33.1 | 03 Aug 24 17:59 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-356000 ssh               | cert-options-356000       | jenkins | v1.33.1 | 03 Aug 24 17:59 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-356000 -- sudo        | cert-options-356000       | jenkins | v1.33.1 | 03 Aug 24 17:59 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-356000                | cert-options-356000       | jenkins | v1.33.1 | 03 Aug 24 17:59 PDT | 03 Aug 24 17:59 PDT |
	| start   | -p running-upgrade-359000             | minikube                  | jenkins | v1.26.0 | 03 Aug 24 17:59 PDT | 03 Aug 24 18:00 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-359000             | running-upgrade-359000    | jenkins | v1.33.1 | 03 Aug 24 18:00 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-170000             | cert-expiration-170000    | jenkins | v1.33.1 | 03 Aug 24 18:02 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-170000             | cert-expiration-170000    | jenkins | v1.33.1 | 03 Aug 24 18:02 PDT | 03 Aug 24 18:02 PDT |
	| start   | -p kubernetes-upgrade-366000          | kubernetes-upgrade-366000 | jenkins | v1.33.1 | 03 Aug 24 18:02 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-366000          | kubernetes-upgrade-366000 | jenkins | v1.33.1 | 03 Aug 24 18:02 PDT | 03 Aug 24 18:02 PDT |
	| start   | -p kubernetes-upgrade-366000          | kubernetes-upgrade-366000 | jenkins | v1.33.1 | 03 Aug 24 18:02 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-366000          | kubernetes-upgrade-366000 | jenkins | v1.33.1 | 03 Aug 24 18:02 PDT | 03 Aug 24 18:02 PDT |
	| start   | -p stopped-upgrade-413000             | minikube                  | jenkins | v1.26.0 | 03 Aug 24 18:02 PDT | 03 Aug 24 18:03 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-413000 stop           | minikube                  | jenkins | v1.26.0 | 03 Aug 24 18:03 PDT | 03 Aug 24 18:03 PDT |
	| start   | -p stopped-upgrade-413000             | stopped-upgrade-413000    | jenkins | v1.33.1 | 03 Aug 24 18:03 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 18:03:28
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 18:03:28.010602    4630 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:03:28.010796    4630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:03:28.010800    4630 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:28.010803    4630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:03:28.010981    4630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:03:28.012131    4630 out.go:298] Setting JSON to false
	I0803 18:03:28.031469    4630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3772,"bootTime":1722729636,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:03:28.031550    4630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:03:28.036816    4630 out.go:177] * [stopped-upgrade-413000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:03:28.043743    4630 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:03:28.043806    4630 notify.go:220] Checking for updates...
	I0803 18:03:28.051778    4630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:03:28.054730    4630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:03:28.057785    4630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:03:28.060785    4630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:03:28.063787    4630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:03:28.067013    4630 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:03:28.069744    4630 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0803 18:03:28.072741    4630 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:03:28.076784    4630 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 18:03:28.082734    4630 start.go:297] selected driver: qemu2
	I0803 18:03:28.082739    4630 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50497 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 18:03:28.082788    4630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:03:28.085412    4630 cni.go:84] Creating CNI manager for ""
	I0803 18:03:28.085430    4630 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:03:28.085456    4630 start.go:340] cluster config:
	{Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50497 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 18:03:28.085509    4630 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:03:28.093755    4630 out.go:177] * Starting "stopped-upgrade-413000" primary control-plane node in "stopped-upgrade-413000" cluster
	I0803 18:03:28.097738    4630 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 18:03:28.097756    4630 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0803 18:03:28.097768    4630 cache.go:56] Caching tarball of preloaded images
	I0803 18:03:28.097826    4630 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:03:28.097831    4630 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0803 18:03:28.097892    4630 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/config.json ...
	I0803 18:03:28.098320    4630 start.go:360] acquireMachinesLock for stopped-upgrade-413000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:03:28.098355    4630 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "stopped-upgrade-413000"
	I0803 18:03:28.098365    4630 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:03:28.098370    4630 fix.go:54] fixHost starting: 
	I0803 18:03:28.098487    4630 fix.go:112] recreateIfNeeded on stopped-upgrade-413000: state=Stopped err=<nil>
	W0803 18:03:28.098495    4630 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:03:28.106742    4630 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-413000" ...
	I0803 18:03:26.171482    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:03:26.171617    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:03:26.187105    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:03:26.187187    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:03:26.199171    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:03:26.199241    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:03:26.210097    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:03:26.210167    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:03:26.220892    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:03:26.220965    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:03:26.231045    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:03:26.231115    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:03:26.241637    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:03:26.241705    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:03:26.251721    4477 logs.go:276] 0 containers: []
	W0803 18:03:26.251732    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:03:26.251787    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:03:26.262049    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:03:26.262067    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:03:26.262073    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:03:26.276517    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:03:26.276525    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:03:26.293925    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:03:26.293936    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:03:26.318451    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:03:26.318460    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:03:26.333011    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:03:26.333024    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:03:26.344988    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:03:26.345002    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:03:26.356243    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:03:26.356256    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:03:26.360719    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:03:26.360729    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:03:26.395260    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:03:26.395276    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:03:26.410059    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:03:26.410070    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:03:26.434259    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:03:26.434272    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:03:26.472170    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:26.472265    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:26.472861    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:03:26.472866    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:03:26.486040    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:03:26.486053    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:03:26.497546    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:03:26.497557    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:03:26.511278    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:03:26.511289    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:03:26.525726    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:03:26.525738    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:03:26.537835    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:03:26.537846    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:03:26.550422    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:26.550433    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:03:26.550459    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:03:26.550464    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:26.550467    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:26.550472    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:26.550508    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
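
Each of these gathering cycles follows the same two-step pattern: list the container IDs for a control-plane component with a name filter, then tail the last 400 lines of each ID. A minimal Go sketch of that loop, assuming only that the docker CLI is on PATH (the helper names here are hypothetical, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersByName lists IDs of all containers, running or exited, whose
// name matches the k8s_<component> filter, mirroring
// `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
func containersByName(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last n log lines of one container, like the
// `docker logs --tail 400 <id>` calls above.
func tailLogs(id string, n int) string {
	out, _ := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out)
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containersByName(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("==> %s [%s] <==\n%s", c, id, tailLogs(id, 400))
		}
	}
}

Exited containers are included (-a) so crashed components still contribute logs; that is why several components above report two IDs, one per restart.
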
	I0803 18:03:28.110752    4630 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:03:28.110824    4630 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50464-:22,hostfwd=tcp::50465-:2376,hostname=stopped-upgrade-413000 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/disk.qcow2
	I0803 18:03:28.159042    4630 main.go:141] libmachine: STDOUT: 
	I0803 18:03:28.159066    4630 main.go:141] libmachine: STDERR: 
	I0803 18:03:28.159072    4630 main.go:141] libmachine: Waiting for VM to start (ssh -p 50464 docker@127.0.0.1)...
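
The qemu-system-aarch64 command above boots the VM with hvf acceleration and user-mode ("slirp") networking; the hostfwd rules are what make guest port 22 reachable as host port 50464 and the Docker daemon's 2376 as 50465, which is why the wait step probes `ssh -p 50464 docker@127.0.0.1`. A trimmed Go sketch of assembling and launching such a command (ISO and disk paths shortened, QMP/pidfile flags omitted; not the exact libmachine code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// hostfwd publishes guest ports on the host over QEMU's user-mode
	// network: host 50464 -> guest 22 (ssh), host 50465 -> guest 2376 (docker).
	nic := "user,model=virtio,hostfwd=tcp::50464-:22,hostfwd=tcp::50465-:2376,hostname=stopped-upgrade-413000"
	cmd := exec.Command("qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf", // macOS Hypervisor.framework, per "Using hvf" above
		"-m", "2200", "-smp", "2",
		"-display", "none",
		"-boot", "d",
		"-cdrom", "boot2docker.iso", // full paths shortened for the sketch
		"-nic", nic,
		"-daemonize", "disk.qcow2",
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("qemu failed: %v\n%s", err, out)
	}
}
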
	I0803 18:03:36.553825    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:03:41.555944    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
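
The healthz probe above is an HTTPS GET against the apiserver's :8443/healthz with roughly a 5-second client deadline (note the 18:03:36.55 to 18:03:41.55 gap before "stopped"). A sketch of such a probe, assuming the common practice of skipping TLS verification for a bootstrap health check:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap before "stopped" above
		Transport: &http.Transport{
			// assumption: bootstrap probes typically skip cert verification
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded while awaiting headers
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
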
	I0803 18:03:41.556060    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:03:41.567642    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:03:41.567723    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:03:41.578635    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:03:41.578704    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:03:41.590400    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:03:41.590473    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:03:41.601975    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:03:41.602050    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:03:41.613144    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:03:41.613210    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:03:41.624277    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:03:41.624343    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:03:41.634179    4477 logs.go:276] 0 containers: []
	W0803 18:03:41.634189    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:03:41.634240    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:03:41.644848    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:03:41.644867    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:03:41.644873    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:03:41.657280    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:03:41.657293    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:03:41.673767    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:03:41.673779    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:03:41.699026    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:03:41.699033    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:03:41.703236    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:03:41.703242    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:03:41.738427    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:03:41.738439    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:03:41.750464    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:03:41.750478    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:03:41.765038    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:03:41.765049    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:03:41.788034    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:03:41.788044    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:03:41.799603    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:03:41.799617    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:03:41.811531    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:03:41.811543    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:03:41.826230    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:03:41.826243    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:03:41.844076    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:03:41.844089    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:03:41.880549    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:41.880641    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:41.881238    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:03:41.881243    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:03:41.896218    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:03:41.896233    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:03:41.911642    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:03:41.911654    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:03:41.929600    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:03:41.929614    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:03:41.950427    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:41.950437    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:03:41.950466    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:03:41.950469    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:41.950473    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:41.950476    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:41.950479    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:03:48.646268    4630 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/config.json ...
	I0803 18:03:48.647024    4630 machine.go:94] provisionDockerMachine start ...
	I0803 18:03:48.647193    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:48.647672    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:48.647685    4630 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 18:03:48.731080    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0803 18:03:48.731114    4630 buildroot.go:166] provisioning hostname "stopped-upgrade-413000"
	I0803 18:03:48.731249    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:48.731516    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:48.731527    4630 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-413000 && echo "stopped-upgrade-413000" | sudo tee /etc/hostname
	I0803 18:03:48.809412    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-413000
	
	I0803 18:03:48.809496    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:48.809670    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:48.809683    4630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-413000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-413000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-413000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 18:03:48.881428    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 18:03:48.881441    4630 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19364-1166/.minikube CaCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19364-1166/.minikube}
	I0803 18:03:48.881449    4630 buildroot.go:174] setting up certificates
	I0803 18:03:48.881455    4630 provision.go:84] configureAuth start
	I0803 18:03:48.881461    4630 provision.go:143] copyHostCerts
	I0803 18:03:48.881545    4630 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem, removing ...
	I0803 18:03:48.881553    4630 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem
	I0803 18:03:48.881725    4630 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem (1082 bytes)
	I0803 18:03:48.881950    4630 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem, removing ...
	I0803 18:03:48.881955    4630 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem
	I0803 18:03:48.882023    4630 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem (1123 bytes)
	I0803 18:03:48.882163    4630 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem, removing ...
	I0803 18:03:48.882167    4630 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem
	I0803 18:03:48.882236    4630 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem (1675 bytes)
	I0803 18:03:48.882346    4630 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-413000 san=[127.0.0.1 localhost minikube stopped-upgrade-413000]
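
The server cert generated here is signed by the minikube CA and carries the logged SANs (127.0.0.1, localhost, minikube, stopped-upgrade-413000), so the TLS Docker endpoint verifies under any of those names. A compressed Go sketch of that kind of issuance; it uses a throwaway in-memory CA instead of the on-disk ca.pem/ca-key.pem, and the key size and validity period are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem / ca-key.pem from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-413000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-413000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
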
	I0803 18:03:48.982049    4630 provision.go:177] copyRemoteCerts
	I0803 18:03:48.982095    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 18:03:48.982104    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0803 18:03:49.015468    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 18:03:49.022285    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0803 18:03:49.028689    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 18:03:49.035887    4630 provision.go:87] duration metric: took 154.430333ms to configureAuth
	I0803 18:03:49.035895    4630 buildroot.go:189] setting minikube options for container-runtime
	I0803 18:03:49.036007    4630 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:03:49.036040    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:49.036147    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:49.036156    4630 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0803 18:03:49.097984    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0803 18:03:49.097999    4630 buildroot.go:70] root file system type: tmpfs
	I0803 18:03:49.098051    4630 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0803 18:03:49.098110    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:49.098230    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:49.098263    4630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0803 18:03:49.161499    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0803 18:03:49.161555    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:49.161671    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:49.161680    4630 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0803 18:03:49.529432    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0803 18:03:49.529446    4630 machine.go:97] duration metric: took 882.434709ms to provisionDockerMachine
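
The diff-then-move idiom in the preceding SSH command keeps the docker restart conditional: the freshly rendered unit is written to docker.service.new and only moved over docker.service (followed by daemon-reload, enable, restart) when the two differ. Here diff fails because no unit existed yet, so the new file is installed and the symlink created. A local-filesystem sketch of the same compare-and-swap, with hypothetical paths:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged writes newPath over path only when contents differ,
// returning true when a daemon-reload/restart is needed.
// A missing destination (as in the log above) counts as "changed".
func installIfChanged(path, newPath string) (bool, error) {
	oldData, err := os.ReadFile(path)
	newData, nerr := os.ReadFile(newPath)
	if nerr != nil {
		return false, nerr
	}
	if err == nil && bytes.Equal(oldData, newData) {
		os.Remove(newPath) // identical: drop the staged copy, no restart
		return false, nil
	}
	return true, os.Rename(newPath, path)
}

func main() {
	changed, err := installIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	if changed {
		// Same follow-up as the SSH command above.
		for _, args := range [][]string{
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			exec.Command("sudo", args...).Run()
		}
	}
}
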
	I0803 18:03:49.529453    4630 start.go:293] postStartSetup for "stopped-upgrade-413000" (driver="qemu2")
	I0803 18:03:49.529460    4630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 18:03:49.529522    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 18:03:49.529532    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0803 18:03:49.564102    4630 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 18:03:49.565324    4630 info.go:137] Remote host: Buildroot 2021.02.12
	I0803 18:03:49.565332    4630 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1166/.minikube/addons for local assets ...
	I0803 18:03:49.565414    4630 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1166/.minikube/files for local assets ...
	I0803 18:03:49.565557    4630 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem -> 16732.pem in /etc/ssl/certs
	I0803 18:03:49.565683    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 18:03:49.568488    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem --> /etc/ssl/certs/16732.pem (1708 bytes)
	I0803 18:03:49.575567    4630 start.go:296] duration metric: took 46.1105ms for postStartSetup
	I0803 18:03:49.575580    4630 fix.go:56] duration metric: took 21.477822875s for fixHost
	I0803 18:03:49.575613    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:49.575721    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:49.575727    4630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 18:03:49.636096    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722733429.723419421
	
	I0803 18:03:49.636106    4630 fix.go:216] guest clock: 1722733429.723419421
	I0803 18:03:49.636110    4630 fix.go:229] Guest: 2024-08-03 18:03:49.723419421 -0700 PDT Remote: 2024-08-03 18:03:49.575581 -0700 PDT m=+21.594381418 (delta=147.838421ms)
	I0803 18:03:49.636121    4630 fix.go:200] guest clock delta is within tolerance: 147.838421ms
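
The clock check parses the guest's `date +%s.%N` output and subtracts the host's wall clock captured at the same moment: 1722733429.723419421 minus 1722733429.575581 gives the 147.838421ms delta logged above. A sketch of that computation (the one-second tolerance is an assumption; the log only states that the delta is within tolerance):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	if len(parts) != 2 {
		return time.Time{}, fmt.Errorf("bad clock string %q", s)
	}
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1722733429.723419421") // guest output from the log
	host := time.Unix(1722733429, 575581000)            // host wall clock at the same instant
	delta := guest.Sub(host)
	const tolerance = time.Second // assumed threshold, not shown in the log
	ok := delta > -tolerance && delta < tolerance
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, ok)
	// Output: delta=147.838421ms within tolerance=1s: true
}
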
	I0803 18:03:49.636124    4630 start.go:83] releasing machines lock for "stopped-upgrade-413000", held for 21.538376667s
	I0803 18:03:49.636190    4630 ssh_runner.go:195] Run: cat /version.json
	I0803 18:03:49.636199    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0803 18:03:49.636367    4630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 18:03:49.636384    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	W0803 18:03:49.636829    4630 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50464: connect: connection refused
	I0803 18:03:49.636850    4630 retry.go:31] will retry after 177.559602ms: dial tcp [::1]:50464: connect: connection refused
	W0803 18:03:49.668680    4630 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0803 18:03:49.668734    4630 ssh_runner.go:195] Run: systemctl --version
	I0803 18:03:49.670579    4630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 18:03:49.672149    4630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 18:03:49.672179    4630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0803 18:03:49.674969    4630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0803 18:03:49.679212    4630 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
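
Those find/sed invocations rewrite any bridge or podman CNI conflist under /etc/cni/net.d so its subnet (and gateway) match minikube's pod CIDR, 10.244.0.0/16, and drop IPv6 dst/subnet entries. A Go sketch of just the subnet rewrite; the real sed expressions above also handle trailing commas and the gateway key:

package main

import (
	"fmt"
	"regexp"
)

// subnetRe matches any "subnet" value in a CNI conflist.
var subnetRe = regexp.MustCompile(`"subnet":\s*"[^"]*"`)

func main() {
	conflist := `{"plugins":[{"type":"bridge","ipam":{"ranges":[[{"subnet":"10.88.0.0/16"}]]}}]}`
	fixed := subnetRe.ReplaceAllString(conflist, `"subnet": "10.244.0.0/16"`)
	fmt.Println(fixed)
}
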
	I0803 18:03:49.679221    4630 start.go:495] detecting cgroup driver to use...
	I0803 18:03:49.679300    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 18:03:49.686252    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0803 18:03:49.689125    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0803 18:03:49.692300    4630 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0803 18:03:49.692329    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0803 18:03:49.695565    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 18:03:49.698437    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0803 18:03:49.701225    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 18:03:49.704511    4630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 18:03:49.707576    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0803 18:03:49.710418    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0803 18:03:49.713336    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0803 18:03:49.716358    4630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 18:03:49.719280    4630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 18:03:49.721726    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:49.802614    4630 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0803 18:03:49.810454    4630 start.go:495] detecting cgroup driver to use...
	I0803 18:03:49.810522    4630 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0803 18:03:49.815314    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 18:03:49.825657    4630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 18:03:49.831533    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 18:03:49.835989    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 18:03:49.840203    4630 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0803 18:03:49.892300    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 18:03:49.897607    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 18:03:49.904063    4630 ssh_runner.go:195] Run: which cri-dockerd
	I0803 18:03:49.905304    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0803 18:03:49.908377    4630 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0803 18:03:49.913377    4630 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0803 18:03:49.991123    4630 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0803 18:03:50.070278    4630 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0803 18:03:50.070350    4630 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0803 18:03:50.075756    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:50.149438    4630 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 18:03:51.306596    4630 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157175708s)
	I0803 18:03:51.306651    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0803 18:03:51.311045    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 18:03:51.316832    4630 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0803 18:03:51.386686    4630 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0803 18:03:51.466215    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:51.547718    4630 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0803 18:03:51.553974    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 18:03:51.558635    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:51.639387    4630 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0803 18:03:51.677609    4630 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0803 18:03:51.677681    4630 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0803 18:03:51.680688    4630 start.go:563] Will wait 60s for crictl version
	I0803 18:03:51.680735    4630 ssh_runner.go:195] Run: which crictl
	I0803 18:03:51.682357    4630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 18:03:51.696622    4630 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0803 18:03:51.696688    4630 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 18:03:51.712273    4630 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 18:03:51.733014    4630 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0803 18:03:51.733136    4630 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0803 18:03:51.734499    4630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
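
That one-liner is an idempotent hosts update: filter out any existing line ending in "<tab>host.minikube.internal", append the fresh "10.0.2.2<tab>host.minikube.internal" mapping, and copy the temp file back over /etc/hosts. The same logic in Go, operating on a string instead of /etc/hosts:

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<host>" and appends a fresh
// "ip\thost" mapping, mirroring the grep -v / echo pipeline above.
func ensureHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n")
}

func main() {
	in := "127.0.0.1\tlocalhost\n10.0.2.2\thost.minikube.internal"
	fmt.Println(ensureHostsEntry(in, "10.0.2.2", "host.minikube.internal"))
}
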
	I0803 18:03:51.737946    4630 kubeadm.go:883] updating cluster {Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50497 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0803 18:03:51.737987    4630 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 18:03:51.738038    4630 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 18:03:51.748555    4630 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 18:03:51.748564    4630 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 18:03:51.748609    4630 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 18:03:51.752266    4630 ssh_runner.go:195] Run: which lz4
	I0803 18:03:51.753757    4630 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0803 18:03:51.754975    4630 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 18:03:51.754985    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0803 18:03:52.655291    4630 docker.go:649] duration metric: took 901.592792ms to copy over tarball
	I0803 18:03:52.655350    4630 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 18:03:51.950997    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:03:53.811514    4630 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.156183667s)
	I0803 18:03:53.811527    4630 ssh_runner.go:146] rm: /preloaded.tar.lz4
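
The preload path is: scp the roughly 360 MB lz4 tarball into the guest as /preloaded.tar.lz4, unpack it under /var with security xattrs preserved, then delete it, which seeds /var/lib/docker without pulling any images. A sketch of the extract step as a local command (requires a tar that accepts -I lz4; the target directory is changed from /var to "." to keep the sketch harmless):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same flags as the log: keep security xattrs, decompress with lz4.
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", ".", "-xf", "preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}
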
	I0803 18:03:53.827965    4630 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 18:03:53.831642    4630 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0803 18:03:53.836702    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:53.914534    4630 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 18:03:55.403483    4630 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.48897675s)
	I0803 18:03:55.403584    4630 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 18:03:55.414350    4630 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 18:03:55.414362    4630 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 18:03:55.414368    4630 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
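
LoadCachedImages only runs because the required-images check failed: the preload shipped k8s.gcr.io-prefixed tags, while this Kubernetes version is checked against registry.k8s.io names, so registry.k8s.io/kube-apiserver:v1.24.1 counts as "wasn't preloaded". A sketch of that set difference:

package main

import "fmt"

// missingImages returns every required image absent from the runtime's list.
func missingImages(required, have []string) []string {
	got := make(map[string]bool, len(have))
	for _, img := range have {
		got[img] = true
	}
	var missing []string
	for _, img := range required {
		if !got[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	have := []string{"k8s.gcr.io/kube-apiserver:v1.24.1", "gcr.io/k8s-minikube/storage-provisioner:v5"}
	required := []string{"registry.k8s.io/kube-apiserver:v1.24.1", "gcr.io/k8s-minikube/storage-provisioner:v5"}
	fmt.Println(missingImages(required, have))
	// [registry.k8s.io/kube-apiserver:v1.24.1] -> triggers the cache load
}
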
	I0803 18:03:55.418677    4630 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:03:55.420396    4630 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:03:55.422492    4630 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:03:55.422630    4630 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:03:55.425471    4630 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:03:55.425698    4630 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:03:55.427158    4630 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:03:55.427275    4630 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:03:55.429012    4630 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0803 18:03:55.429012    4630 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:03:55.430188    4630 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:03:55.430281    4630 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:03:55.431296    4630 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0803 18:03:55.431391    4630 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0803 18:03:55.440342    4630 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:03:55.441425    4630 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0803 18:03:55.835161    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:03:55.847213    4630 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0803 18:03:55.847235    4630 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:03:55.847297    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
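
Each "needs transfer" decision compares the runtime's image ID (`docker image inspect --format {{.Id}}`) against the hash recorded for the cached copy; on mismatch or absence the tag is removed, the cached tarball is scp'd over, and `docker load` reinstates it, as the subsequent lines show. A sketch of that check using the apiserver hash from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the image must be reloaded from the local
// cache: true when it is absent from the runtime or its ID does not match
// the hash recorded for the cached copy.
func needsTransfer(image, wantHash string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present in the runtime at all
	}
	id := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	return id != wantHash
}

func main() {
	img := "registry.k8s.io/kube-apiserver:v1.24.1"
	want := "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" // hash from the log
	if needsTransfer(img, want) {
		fmt.Println(img, "needs transfer")
		exec.Command("docker", "rmi", img).Run() // then scp + `docker load`, as above
	}
}
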
	I0803 18:03:55.857291    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:03:55.857376    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0803 18:03:55.865291    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:03:55.867788    4630 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0803 18:03:55.867807    4630 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:03:55.867847    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:03:55.877506    4630 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0803 18:03:55.877531    4630 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:03:55.877583    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:03:55.879750    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0803 18:03:55.887718    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0803 18:03:55.892314    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0803 18:03:55.903095    4630 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0803 18:03:55.903239    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:03:55.903784    4630 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0803 18:03:55.903810    4630 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0803 18:03:55.903836    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0803 18:03:55.908601    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:03:55.914791    4630 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0803 18:03:55.914815    4630 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:03:55.914865    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:03:55.925374    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0803 18:03:55.925497    4630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0803 18:03:55.930570    4630 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0803 18:03:55.930590    4630 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:03:55.930639    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:03:55.935452    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0803 18:03:55.935570    4630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0803 18:03:55.935574    4630 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0803 18:03:55.935590    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0803 18:03:55.939896    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0803 18:03:55.944607    4630 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0803 18:03:55.944626    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0803 18:03:55.947570    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0803 18:03:55.947576    4630 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0803 18:03:55.947600    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0803 18:03:55.978055    4630 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0803 18:03:55.978080    4630 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0803 18:03:55.978135    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0803 18:03:56.016488    4630 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0803 18:03:56.016506    4630 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0803 18:03:56.016516    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0803 18:03:56.016543    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0803 18:03:56.016647    4630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W0803 18:03:56.043667    4630 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0803 18:03:56.043780    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:03:56.066294    4630 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0803 18:03:56.066338    4630 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0803 18:03:56.066364    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0803 18:03:56.066382    4630 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0803 18:03:56.066404    4630 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:03:56.066448    4630 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:03:56.101893    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0803 18:03:56.102031    4630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0803 18:03:56.115081    4630 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0803 18:03:56.115119    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0803 18:03:56.178518    4630 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0803 18:03:56.178534    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0803 18:03:56.525029    4630 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0803 18:03:56.525050    4630 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0803 18:03:56.525056    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0803 18:03:56.647005    4630 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0803 18:03:56.647047    4630 cache_images.go:92] duration metric: took 1.232707917s to LoadCachedImages
	W0803 18:03:56.647087    4630 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0803 18:03:56.647094    4630 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0803 18:03:56.647149    4630 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-413000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 18:03:56.647211    4630 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0803 18:03:56.660880    4630 cni.go:84] Creating CNI manager for ""
	I0803 18:03:56.660893    4630 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:03:56.660899    4630 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 18:03:56.660908    4630 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-413000 NodeName:stopped-upgrade-413000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 18:03:56.660980    4630 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-413000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
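	The kubeadm config printed above is one multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml. A small sketch that walks those documents with gopkg.in/yaml.v3 and prints each kind, useful when checking a generated config by hand; the path is taken from the log:

    // Sketch: enumerate the documents of minikube's generated kubeadm.yaml.
    // Assumes gopkg.in/yaml.v3 is available; only apiVersion/kind are decoded.
    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f) // yaml.v3 decodes one document per Decode call
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }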
	
	I0803 18:03:56.661037    4630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0803 18:03:56.664476    4630 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 18:03:56.664506    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 18:03:56.667644    4630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0803 18:03:56.672720    4630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 18:03:56.677837    4630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0803 18:03:56.683280    4630 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0803 18:03:56.684638    4630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
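The one-liner above keeps /etc/hosts idempotent: it filters out any existing control-plane.minikube.internal line, appends the fresh mapping, and copies the temp file back into place. A rough Go equivalent, pointed at a scratch file so it can run unprivileged (swap in /etc/hosts to mirror the real step):

    // Sketch: drop any hosts line already ending in "\t<name>", append the
    // new ip\tname mapping, and replace the file via a temp file, as the
    // bash one-liner in the log does. The scratch path is illustrative.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func pinHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path) // atomic swap on the same filesystem
    }

    func main() {
    	if err := pinHost("hosts.test", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }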
	I0803 18:03:56.688594    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:56.775224    4630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 18:03:56.780890    4630 certs.go:68] Setting up /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000 for IP: 10.0.2.15
	I0803 18:03:56.780896    4630 certs.go:194] generating shared ca certs ...
	I0803 18:03:56.780905    4630 certs.go:226] acquiring lock for ca certs: {Name:mk4c6ee72dd2b768bec67e582e0b6b1af1b504e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:03:56.781068    4630 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.key
	I0803 18:03:56.781125    4630 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.key
	I0803 18:03:56.781130    4630 certs.go:256] generating profile certs ...
	I0803 18:03:56.781219    4630 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/client.key
	I0803 18:03:56.781235    4630 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657
	I0803 18:03:56.781246    4630 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0803 18:03:57.052023    4630 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 ...
	I0803 18:03:57.052040    4630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657: {Name:mkee3041379328624e4e79a515ed80df02ed59f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:03:57.052383    4630 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657 ...
	I0803 18:03:57.052389    4630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657: {Name:mk7c5694bb8397d1fed4b6507c5be27e8fbc5792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:03:57.052531    4630 certs.go:381] copying /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt
	I0803 18:03:57.052689    4630 certs.go:385] copying /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key
	I0803 18:03:57.052864    4630 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/proxy-client.key
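The apiserver profile cert generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]. A sketch of issuing a certificate with the same SANs via crypto/x509, self-signed here for brevity where minikube signs with its minikubeCA:

    // Sketch: create a server certificate whose IP SANs match the log's
    // apiserver cert. Self-signed (template is its own parent); the real
    // flow signs with the shared CA key instead.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }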
	I0803 18:03:57.053008    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673.pem (1338 bytes)
	W0803 18:03:57.053036    4630 certs.go:480] ignoring /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673_empty.pem, impossibly tiny 0 bytes
	I0803 18:03:57.053042    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 18:03:57.053069    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem (1082 bytes)
	I0803 18:03:57.053088    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem (1123 bytes)
	I0803 18:03:57.053106    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem (1675 bytes)
	I0803 18:03:57.053144    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem (1708 bytes)
	I0803 18:03:57.053477    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 18:03:57.061023    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0803 18:03:57.068894    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 18:03:57.076474    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 18:03:57.084476    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0803 18:03:57.092794    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 18:03:57.100390    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 18:03:57.108274    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 18:03:57.116246    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673.pem --> /usr/share/ca-certificates/1673.pem (1338 bytes)
	I0803 18:03:57.123977    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem --> /usr/share/ca-certificates/16732.pem (1708 bytes)
	I0803 18:03:57.132337    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 18:03:57.140330    4630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 18:03:57.147878    4630 ssh_runner.go:195] Run: openssl version
	I0803 18:03:57.150325    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1673.pem && ln -fs /usr/share/ca-certificates/1673.pem /etc/ssl/certs/1673.pem"
	I0803 18:03:57.153597    4630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1673.pem
	I0803 18:03:57.155236    4630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 00:28 /usr/share/ca-certificates/1673.pem
	I0803 18:03:57.155264    4630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1673.pem
	I0803 18:03:57.157345    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1673.pem /etc/ssl/certs/51391683.0"
	I0803 18:03:57.160795    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16732.pem && ln -fs /usr/share/ca-certificates/16732.pem /etc/ssl/certs/16732.pem"
	I0803 18:03:57.164724    4630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16732.pem
	I0803 18:03:57.166552    4630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 00:28 /usr/share/ca-certificates/16732.pem
	I0803 18:03:57.166598    4630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16732.pem
	I0803 18:03:57.168584    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16732.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 18:03:57.172476    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 18:03:57.176191    4630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 18:03:57.177990    4630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:21 /usr/share/ca-certificates/minikubeCA.pem
	I0803 18:03:57.178013    4630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 18:03:57.179843    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
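The ls / openssl x509 -hash / ln -fs triple above installs each PEM under /etc/ssl/certs/<subject-hash>.0, the layout OpenSSL uses to look up CA certificates. A sketch of the same hash-and-link step; the openssl flags are copied from the log and the paths are illustrative:

    // Sketch: compute a cert's OpenSSL subject hash and symlink <hash>.0
    // next to the system certs so OpenSSL's CA lookup can find it.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCA(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	os.Remove(link) // replace a stale link if present
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		log.Fatal(err)
    	}
    }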
	I0803 18:03:57.183119    4630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 18:03:57.184664    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 18:03:57.187537    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 18:03:57.189710    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 18:03:57.191751    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 18:03:57.193723    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 18:03:57.195492    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
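The -checkend 86400 probes above ask whether each control-plane cert expires within the next 24 hours. A pure-Go sketch of the same check, assuming a readable PEM path:

    // Sketch: Go equivalent of `openssl x509 -checkend 86400` - report
    // whether a PEM certificate's NotAfter falls inside the given window.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }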
	I0803 18:03:57.197473    4630 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50497 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 18:03:57.197561    4630 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 18:03:57.208393    4630 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 18:03:57.211895    4630 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0803 18:03:57.211903    4630 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0803 18:03:57.211945    4630 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0803 18:03:57.215440    4630 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0803 18:03:57.215756    4630 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-413000" does not appear in /Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:03:57.215859    4630 kubeconfig.go:62] /Users/jenkins/minikube-integration/19364-1166/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-413000" cluster setting kubeconfig missing "stopped-upgrade-413000" context setting]
	I0803 18:03:57.216082    4630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/kubeconfig: {Name:mk0a3c55e1982b2d92db1034b47f8334d27942c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:03:57.216522    4630 kapi.go:59] client config for stopped-upgrade-413000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1019a01b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 18:03:57.216838    4630 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0803 18:03:57.220132    4630 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-413000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0803 18:03:57.220141    4630 kubeadm.go:1160] stopping kube-system containers ...
	I0803 18:03:57.220194    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 18:03:57.231543    4630 docker.go:483] Stopping containers: [a41ac171ebac 64391ce8f5a9 db60aaba5af7 eaff7d840b96 d4fb7551ff98 51278babd119 ca2ef152d64a 2fce2c3712d4]
	I0803 18:03:57.231620    4630 ssh_runner.go:195] Run: docker stop a41ac171ebac 64391ce8f5a9 db60aaba5af7 eaff7d840b96 d4fb7551ff98 51278babd119 ca2ef152d64a 2fce2c3712d4
	I0803 18:03:57.243772    4630 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0803 18:03:57.249560    4630 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 18:03:57.252852    4630 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 18:03:57.252861    4630 kubeadm.go:157] found existing configuration files:
	
	I0803 18:03:57.252889    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/admin.conf
	I0803 18:03:57.256345    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 18:03:57.256387    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 18:03:57.259998    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/kubelet.conf
	I0803 18:03:57.263351    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 18:03:57.263392    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 18:03:57.266312    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/controller-manager.conf
	I0803 18:03:57.268998    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 18:03:57.269036    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 18:03:57.272506    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/scheduler.conf
	I0803 18:03:57.275846    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 18:03:57.275882    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 18:03:57.279295    4630 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 18:03:57.282401    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:03:57.304578    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:03:57.943283    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
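Rather than one monolithic kubeadm init, the restart path above replays individual init phases against the generated config (certs and kubeconfig here, control-plane and etcd a few lines further down). A sketch running the same phases in order; the phase names and config path are taken from the log:

    // Sketch: replay the kubeadm init phases the restart flow uses,
    // in the same order as the log. Assumes kubeadm is on PATH and run
    // with sufficient privileges.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			log.Fatalf("%v: %v", p, err)
    		}
    	}
    }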
	I0803 18:03:56.953035    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:03:56.953156    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:03:56.964654    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:03:56.964717    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:03:56.975524    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:03:56.975583    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:03:56.986143    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:03:56.986210    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:03:56.999164    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:03:56.999234    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:03:57.010137    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:03:57.010203    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:03:57.020783    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:03:57.020845    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:03:57.031173    4477 logs.go:276] 0 containers: []
	W0803 18:03:57.031189    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:03:57.031248    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:03:57.041878    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:03:57.041893    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:03:57.041899    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:03:57.053337    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:03:57.053347    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:03:57.065826    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:03:57.065837    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:03:57.070472    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:03:57.070481    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:03:57.091624    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:03:57.091634    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:03:57.113728    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:03:57.113740    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:03:57.129335    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:03:57.129348    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:03:57.148130    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:03:57.148139    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:03:57.161245    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:03:57.161254    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:03:57.198810    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:57.198908    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:57.199508    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:03:57.199514    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:03:57.238260    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:03:57.238272    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:03:57.258075    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:03:57.258083    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:03:57.274510    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:03:57.274519    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:03:57.301024    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:03:57.301047    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:03:57.313970    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:03:57.313986    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:03:57.326443    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:03:57.326456    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:03:57.338341    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:03:57.338352    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:03:57.350734    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:57.350746    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:03:57.350775    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:03:57.350781    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:03:57.350785    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:03:57.350789    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:57.350792    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:03:58.069700    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:03:58.095976    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:03:58.118053    4630 api_server.go:52] waiting for apiserver process to appear ...
	I0803 18:03:58.118134    4630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:03:58.620209    4630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:03:59.120203    4630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:03:59.124523    4630 api_server.go:72] duration metric: took 1.006495708s to wait for apiserver process to appear ...
	I0803 18:03:59.124537    4630 api_server.go:88] waiting for apiserver healthz status ...
	I0803 18:03:59.124546    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:04.126550    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:04.126589    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
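Both profiles now sit in the wait loop that probes https://10.0.2.15:8443/healthz until the apiserver answers. A sketch of such a poll with a per-probe timeout; InsecureSkipVerify here stands in for the CA handling the real client performs:

    // Sketch: poll the apiserver's /healthz until it returns 200 or an
    // overall deadline passes. TLS verification is skipped only because
    // this illustrative probe loads no CA bundle.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			ok := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if ok {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for /healthz")
    }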
	I0803 18:04:07.354630    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:09.126818    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:09.126860    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:12.356725    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:12.356870    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:04:12.370835    4477 logs.go:276] 2 containers: [9e6227842e53 0b82634b0f3a]
	I0803 18:04:12.370919    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:04:12.382119    4477 logs.go:276] 2 containers: [3150a8cd7259 4b4d796858e2]
	I0803 18:04:12.382191    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:04:12.392394    4477 logs.go:276] 1 containers: [837106f253fc]
	I0803 18:04:12.392463    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:04:12.407931    4477 logs.go:276] 2 containers: [910358cdfc3a eb369f588964]
	I0803 18:04:12.408004    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:04:12.418801    4477 logs.go:276] 1 containers: [95d7841031fe]
	I0803 18:04:12.418874    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:04:12.429943    4477 logs.go:276] 2 containers: [2a2fca0a39b4 df62fe6ae6da]
	I0803 18:04:12.430017    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:04:12.441729    4477 logs.go:276] 0 containers: []
	W0803 18:04:12.441740    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:04:12.441796    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:04:12.457614    4477 logs.go:276] 2 containers: [e2fb415b1036 0913da792538]
	I0803 18:04:12.457630    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:04:12.457637    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:04:12.461964    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:04:12.461972    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:04:12.496670    4477 logs.go:123] Gathering logs for kube-scheduler [eb369f588964] ...
	I0803 18:04:12.496680    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb369f588964"
	I0803 18:04:12.512300    4477 logs.go:123] Gathering logs for kube-proxy [95d7841031fe] ...
	I0803 18:04:12.512312    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95d7841031fe"
	I0803 18:04:12.524550    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:04:12.524560    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:04:12.559238    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:04:12.559330    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:04:12.559926    4477 logs.go:123] Gathering logs for coredns [837106f253fc] ...
	I0803 18:04:12.559930    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 837106f253fc"
	I0803 18:04:12.571056    4477 logs.go:123] Gathering logs for kube-scheduler [910358cdfc3a] ...
	I0803 18:04:12.571068    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 910358cdfc3a"
	I0803 18:04:12.582804    4477 logs.go:123] Gathering logs for kube-controller-manager [2a2fca0a39b4] ...
	I0803 18:04:12.582816    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a2fca0a39b4"
	I0803 18:04:12.600055    4477 logs.go:123] Gathering logs for storage-provisioner [0913da792538] ...
	I0803 18:04:12.600066    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0913da792538"
	I0803 18:04:12.612261    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:04:12.612272    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:04:12.624293    4477 logs.go:123] Gathering logs for kube-apiserver [9e6227842e53] ...
	I0803 18:04:12.624304    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e6227842e53"
	I0803 18:04:12.638708    4477 logs.go:123] Gathering logs for kube-apiserver [0b82634b0f3a] ...
	I0803 18:04:12.638719    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b82634b0f3a"
	I0803 18:04:12.665785    4477 logs.go:123] Gathering logs for etcd [3150a8cd7259] ...
	I0803 18:04:12.665796    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3150a8cd7259"
	I0803 18:04:12.680137    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:04:12.680148    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:04:12.702475    4477 logs.go:123] Gathering logs for etcd [4b4d796858e2] ...
	I0803 18:04:12.702481    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4d796858e2"
	I0803 18:04:12.718910    4477 logs.go:123] Gathering logs for kube-controller-manager [df62fe6ae6da] ...
	I0803 18:04:12.718921    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df62fe6ae6da"
	I0803 18:04:12.730539    4477 logs.go:123] Gathering logs for storage-provisioner [e2fb415b1036] ...
	I0803 18:04:12.730549    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2fb415b1036"
	I0803 18:04:12.742662    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:04:12.742672    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:04:12.742701    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:04:12.742706    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:04:12.742710    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:04:12.742715    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:04:12.742717    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:04:14.127211    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:14.127248    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:19.127727    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:19.127780    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:22.746577    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:24.128504    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:24.128545    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:27.748788    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:27.748850    4477 kubeadm.go:597] duration metric: took 4m7.387518375s to restartPrimaryControlPlane
	W0803 18:04:27.748931    4477 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0803 18:04:27.748956    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0803 18:04:28.746050    4477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 18:04:28.750863    4477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 18:04:28.753643    4477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 18:04:28.756339    4477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 18:04:28.756346    4477 kubeadm.go:157] found existing configuration files:
	
	I0803 18:04:28.756369    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf
	I0803 18:04:28.759319    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 18:04:28.759346    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 18:04:28.762043    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf
	I0803 18:04:28.764618    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 18:04:28.764641    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 18:04:28.767455    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf
	I0803 18:04:28.770018    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 18:04:28.770040    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 18:04:28.772577    4477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf
	I0803 18:04:28.775535    4477 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 18:04:28.775556    4477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 18:04:28.778106    4477 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 18:04:28.795271    4477 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0803 18:04:28.795301    4477 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 18:04:28.850440    4477 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 18:04:28.850499    4477 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 18:04:28.850548    4477 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 18:04:28.899476    4477 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 18:04:28.903632    4477 out.go:204]   - Generating certificates and keys ...
	I0803 18:04:28.903663    4477 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 18:04:28.903700    4477 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 18:04:28.903744    4477 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0803 18:04:28.903777    4477 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0803 18:04:28.903815    4477 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0803 18:04:28.903842    4477 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0803 18:04:28.903874    4477 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0803 18:04:28.903902    4477 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0803 18:04:28.903939    4477 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0803 18:04:28.903986    4477 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0803 18:04:28.904005    4477 kubeadm.go:310] [certs] Using the existing "sa" key
	I0803 18:04:28.904035    4477 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 18:04:29.041141    4477 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 18:04:29.151241    4477 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 18:04:29.226313    4477 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 18:04:29.269217    4477 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 18:04:29.298616    4477 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 18:04:29.298960    4477 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 18:04:29.299076    4477 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 18:04:29.387750    4477 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 18:04:29.390546    4477 out.go:204]   - Booting up control plane ...
	I0803 18:04:29.390593    4477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 18:04:29.390633    4477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 18:04:29.390666    4477 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 18:04:29.390715    4477 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 18:04:29.390804    4477 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0803 18:04:29.129411    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:29.129432    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:33.390097    4477 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.003797 seconds
	I0803 18:04:33.390176    4477 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 18:04:33.394324    4477 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 18:04:33.907400    4477 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 18:04:33.907628    4477 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-359000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 18:04:34.411289    4477 kubeadm.go:310] [bootstrap-token] Using token: hwjfj5.fjkrvrvpyv2v02j4
	I0803 18:04:34.417538    4477 out.go:204]   - Configuring RBAC rules ...
	I0803 18:04:34.417609    4477 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 18:04:34.417651    4477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 18:04:34.419272    4477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 18:04:34.421318    4477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 18:04:34.422134    4477 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 18:04:34.423265    4477 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 18:04:34.427665    4477 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 18:04:34.595100    4477 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 18:04:34.815345    4477 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 18:04:34.815859    4477 kubeadm.go:310] 
	I0803 18:04:34.815892    4477 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 18:04:34.815895    4477 kubeadm.go:310] 
	I0803 18:04:34.815930    4477 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 18:04:34.815933    4477 kubeadm.go:310] 
	I0803 18:04:34.815945    4477 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 18:04:34.815978    4477 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 18:04:34.816002    4477 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 18:04:34.816022    4477 kubeadm.go:310] 
	I0803 18:04:34.816053    4477 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 18:04:34.816056    4477 kubeadm.go:310] 
	I0803 18:04:34.816081    4477 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 18:04:34.816085    4477 kubeadm.go:310] 
	I0803 18:04:34.816124    4477 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 18:04:34.816166    4477 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 18:04:34.816210    4477 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 18:04:34.816213    4477 kubeadm.go:310] 
	I0803 18:04:34.816263    4477 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 18:04:34.816312    4477 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 18:04:34.816318    4477 kubeadm.go:310] 
	I0803 18:04:34.816363    4477 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hwjfj5.fjkrvrvpyv2v02j4 \
	I0803 18:04:34.816420    4477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8926886cd496fcdb8fb5b92a5ce19b9a5533dd397e42f479b7664c72b739cada \
	I0803 18:04:34.816434    4477 kubeadm.go:310] 	--control-plane 
	I0803 18:04:34.816438    4477 kubeadm.go:310] 
	I0803 18:04:34.816481    4477 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 18:04:34.816485    4477 kubeadm.go:310] 
	I0803 18:04:34.816527    4477 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hwjfj5.fjkrvrvpyv2v02j4 \
	I0803 18:04:34.816587    4477 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8926886cd496fcdb8fb5b92a5ce19b9a5533dd397e42f479b7664c72b739cada 
	I0803 18:04:34.816661    4477 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 18:04:34.816668    4477 cni.go:84] Creating CNI manager for ""
	I0803 18:04:34.816675    4477 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:04:34.820381    4477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 18:04:34.827425    4477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 18:04:34.830313    4477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
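The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI config. Its contents are not shown in the log; the sketch below writes an assumed conflist shaped after the pod CIDR seen earlier (10.244.0.0/16), so the JSON body is illustrative rather than minikube's exact file:

    // Sketch: materialize a plausible bridge CNI conflist. The JSON is an
    // assumption modeled on the log's pod CIDR; the real file may differ.
    package main

    import (
    	"log"
    	"os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    `

    func main() {
    	// Writes locally; the real step copies to /etc/cni/net.d/1-k8s.conflist.
    	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		log.Fatal(err)
    	}
    }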
	I0803 18:04:34.834970    4477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 18:04:34.835010    4477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 18:04:34.835028    4477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-359000 minikube.k8s.io/updated_at=2024_08_03T18_04_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=running-upgrade-359000 minikube.k8s.io/primary=true
	I0803 18:04:34.876009    4477 kubeadm.go:1113] duration metric: took 41.030833ms to wait for elevateKubeSystemPrivileges
	I0803 18:04:34.876019    4477 ops.go:34] apiserver oom_adj: -16
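The -16 read from /proc/$(pgrep kube-apiserver)/oom_adj means the kernel's OOM killer strongly deprioritizes the apiserver (legacy oom_adj ranges from -17, never kill, up to +15). A small Go sketch of the same check, assuming pgrep -n resolves the newest matching PID as in the shell command above:

    // Sketch: read the oom_adj of the newest kube-apiserver process,
    // mirroring `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            log.Fatal(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }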
	I0803 18:04:34.876101    4477 kubeadm.go:394] duration metric: took 4m14.529663667s to StartCluster
	I0803 18:04:34.876112    4477 settings.go:142] acquiring lock: {Name:mkc455f89a0a1d96857baea22a1ca4141ab02c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:04:34.876201    4477 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:04:34.876564    4477 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/kubeconfig: {Name:mk0a3c55e1982b2d92db1034b47f8334d27942c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:04:34.876784    4477 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:04:34.876840    4477 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 18:04:34.876877    4477 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-359000"
	I0803 18:04:34.876889    4477 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-359000"
	W0803 18:04:34.876892    4477 addons.go:243] addon storage-provisioner should already be in state true
	I0803 18:04:34.876931    4477 host.go:66] Checking if "running-upgrade-359000" exists ...
	I0803 18:04:34.876933    4477 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-359000"
	I0803 18:04:34.876945    4477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-359000"
	I0803 18:04:34.876904    4477 config.go:182] Loaded profile config "running-upgrade-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:04:34.877853    4477 kapi.go:59] client config for running-upgrade-359000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/running-upgrade-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1045ac1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 18:04:34.877973    4477 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-359000"
	W0803 18:04:34.877980    4477 addons.go:243] addon default-storageclass should already be in state true
	I0803 18:04:34.877987    4477 host.go:66] Checking if "running-upgrade-359000" exists ...
	I0803 18:04:34.880305    4477 out.go:177] * Verifying Kubernetes components...
	I0803 18:04:34.880643    4477 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 18:04:34.884513    4477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 18:04:34.884521    4477 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	I0803 18:04:34.888162    4477 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:04:34.892316    4477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:04:34.896382    4477 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 18:04:34.896388    4477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 18:04:34.896395    4477 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/running-upgrade-359000/id_rsa Username:docker}
	I0803 18:04:34.966930    4477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 18:04:34.972008    4477 api_server.go:52] waiting for apiserver process to appear ...
	I0803 18:04:34.972045    4477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:04:34.975761    4477 api_server.go:72] duration metric: took 98.968083ms to wait for apiserver process to appear ...
	I0803 18:04:34.975770    4477 api_server.go:88] waiting for apiserver healthz status ...
	I0803 18:04:34.975778    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:34.981590    4477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 18:04:34.998159    4477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
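From here both test processes (PIDs 4477 and 4630) settle into the same loop: issue a GET against https://10.0.2.15:8443/healthz with a client-side timeout, log "stopped: ... context deadline exceeded" when no response arrives, and retry roughly every five seconds. A minimal sketch of that polling pattern (the InsecureSkipVerify setting and five-second cadence are assumptions made to keep the example self-contained):

    // Sketch: poll an apiserver /healthz endpoint until it answers 200 OK.
    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Surfaces as "Client.Timeout exceeded while awaiting headers" on expiry.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: skip certificate checks in this sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                log.Printf("stopped: %v", err)
                time.Sleep(5 * time.Second)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                log.Println("apiserver is healthy")
                return
            }
            time.Sleep(5 * time.Second)
        }
    }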
	I0803 18:04:34.130546    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:34.130642    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:39.977743    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:39.977773    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:39.132535    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:39.132595    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:44.977937    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:44.977979    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:44.134801    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:44.134841    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:49.978271    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:49.978321    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:49.136994    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:49.137019    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:54.978769    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:54.978823    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:54.139094    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:54.139136    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:59.979484    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:59.979525    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:59.141300    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:59.141442    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:04:59.155297    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:04:59.155381    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:04:59.166874    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:04:59.166946    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:04:59.177366    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:04:59.177429    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:04:59.187488    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:04:59.187562    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:04:59.198161    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:04:59.198225    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:04:59.208312    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:04:59.208383    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:04:59.221100    4630 logs.go:276] 0 containers: []
	W0803 18:04:59.221110    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:04:59.221166    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:04:59.231274    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
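Each diagnostic pass starts by enumerating container IDs per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}; an empty result (as for kindnet above) is logged as "0 containers". A sketch of that enumeration step:

    // Sketch: list container IDs whose names match k8s_<component>,
    // mirroring `docker ps -a --filter=name=k8s_etcd --format={{.ID}}`.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // One ID per output line; Fields drops the trailing newline.
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, err := containerIDs(c)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }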
	I0803 18:04:59.231293    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:04:59.231299    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:04:59.272448    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:04:59.272458    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:04:59.289697    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:04:59.289708    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:04:59.330997    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:04:59.331012    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:04:59.342458    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:04:59.342467    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:04:59.360788    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:04:59.360798    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:04:59.372294    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:04:59.372304    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:04:59.391770    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:04:59.391781    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:04:59.415692    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:04:59.415698    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:04:59.519994    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:04:59.520009    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:04:59.533917    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:04:59.533931    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:04:59.549582    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:04:59.549594    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:04:59.567076    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:04:59.567086    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:04:59.571335    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:04:59.571345    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:04:59.587193    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:04:59.587203    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:04:59.600067    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:04:59.600078    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:04:59.621515    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:04:59.621525    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
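The "container status" step uses a shell fallback chain: prefer crictl when it is on PATH, otherwise fall back to docker ps -a. The same preference can be expressed in Go with exec.LookPath; a sketch:

    // Sketch: run `crictl ps -a` when crictl is installed, else `docker ps -a`,
    // mirroring `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("docker", "ps", "-a")
        if path, err := exec.LookPath("crictl"); err == nil {
            cmd = exec.Command(path, "ps", "-a")
        }
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }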
	I0803 18:05:02.135416    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:04.980351    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:04.980394    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0803 18:05:05.326006    4477 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0803 18:05:05.329255    4477 out.go:177] * Enabled addons: storage-provisioner
	I0803 18:05:05.337199    4477 addons.go:510] duration metric: took 30.461265666s for enable addons: enabled=[storage-provisioner]
	I0803 18:05:07.136508    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:07.136654    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:07.149019    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:07.149100    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:07.160582    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:07.160654    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:07.171150    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:07.171219    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:07.183768    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:07.183842    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:07.194761    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:07.194830    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:07.209739    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:07.209801    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:07.219998    4630 logs.go:276] 0 containers: []
	W0803 18:05:07.220009    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:07.220070    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:07.230725    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:07.230740    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:07.230745    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:07.242586    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:07.242599    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:07.257186    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:07.257195    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:07.274679    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:07.274692    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:07.299972    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:07.299979    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:07.311791    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:07.311805    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:07.327095    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:07.327106    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:07.341720    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:07.341731    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:07.355269    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:07.355280    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:07.370070    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:07.370079    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:07.383760    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:07.383774    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:07.395271    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:07.395281    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:07.432293    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:07.432303    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:07.443401    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:07.443412    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:07.454335    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:07.454350    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:07.490948    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:07.490956    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:07.495038    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:07.495046    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:09.981462    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:09.981505    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:10.035772    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:14.982804    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:14.982847    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:15.037866    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:15.038038    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:15.053669    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:15.053754    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:15.066372    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:15.066448    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:15.077259    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:15.077331    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:15.087326    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:15.087397    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:15.097789    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:15.097856    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:15.108185    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:15.108254    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:15.118573    4630 logs.go:276] 0 containers: []
	W0803 18:05:15.118584    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:15.118640    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:15.129001    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:15.129022    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:15.129028    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:15.168352    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:15.168364    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:15.209955    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:15.209966    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:15.229909    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:15.229919    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:15.243851    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:15.243861    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:15.258588    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:15.258598    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:15.276362    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:15.276371    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:15.290003    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:15.290017    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:15.301526    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:15.301538    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:15.305563    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:15.305572    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:15.316792    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:15.316807    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:15.331240    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:15.331251    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:15.345499    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:15.345509    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:15.356711    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:15.356721    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:15.368826    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:15.368836    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:15.405194    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:15.405205    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:15.428546    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:15.428554    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:17.949899    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:19.984575    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:19.984620    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:22.952057    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:22.952214    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:22.968967    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:22.969043    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:22.982021    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:22.982090    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:22.992266    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:22.992328    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:23.002918    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:23.002982    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:24.986684    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:24.986725    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:23.013423    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:23.013491    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:23.023401    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:23.023468    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:23.033883    4630 logs.go:276] 0 containers: []
	W0803 18:05:23.033894    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:23.033947    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:23.044981    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:23.045000    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:23.045010    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:23.059684    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:23.059694    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:23.074083    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:23.074093    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:23.085591    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:23.085599    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:23.101132    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:23.101146    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:23.112348    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:23.112360    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:23.126544    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:23.126553    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:23.138857    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:23.138868    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:23.150952    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:23.150963    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:23.189754    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:23.189764    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:23.193746    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:23.193752    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:23.228734    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:23.228744    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:23.243430    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:23.243439    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:23.269320    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:23.269327    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:23.285321    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:23.285336    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:23.296800    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:23.296812    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:23.314834    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:23.314845    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:25.854374    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:29.987285    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:29.987338    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:30.856589    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:30.856715    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:30.874364    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:30.874457    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:30.886121    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:30.886188    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:30.896931    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:30.897003    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:30.907302    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:30.907374    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:30.917693    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:30.917754    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:30.932456    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:30.932526    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:30.942962    4630 logs.go:276] 0 containers: []
	W0803 18:05:30.942973    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:30.943028    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:30.952915    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:30.952934    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:30.952940    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:30.957077    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:30.957084    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:30.971232    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:30.971242    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:31.009634    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:31.009645    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:31.021425    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:31.021439    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:31.046690    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:31.046704    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:31.061612    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:31.061623    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:31.078652    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:31.078663    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:31.089880    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:31.089893    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:31.101085    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:31.101096    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:31.115773    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:31.115784    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:31.130377    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:31.130391    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:31.145833    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:31.145844    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:31.159288    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:31.159299    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:31.171586    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:31.171600    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:31.209446    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:31.209457    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:31.249081    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:31.249092    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:34.989582    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:34.989748    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:35.001391    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:05:35.001463    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:35.011474    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:05:35.011545    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:35.021635    4477 logs.go:276] 2 containers: [bcbb40889ca3 3d1437e6d6fc]
	I0803 18:05:35.021701    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:35.032158    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:05:35.032229    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:35.042526    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:05:35.042595    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:35.056461    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:05:35.056530    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:35.066666    4477 logs.go:276] 0 containers: []
	W0803 18:05:35.066679    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:35.066735    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:35.077759    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:05:35.077774    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:05:35.077779    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:05:35.089468    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:05:35.089478    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:05:35.106871    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:05:35.106884    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:05:35.118967    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:35.118977    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:05:35.138176    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:05:35.138269    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:05:35.154360    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:35.154368    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:35.188716    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:05:35.188730    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:05:35.203061    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:05:35.203072    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:05:35.214237    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:05:35.214250    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:05:35.237825    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:35.237839    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:35.242237    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:05:35.242244    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:05:35.256883    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:05:35.256894    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:05:35.268415    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:35.268427    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:35.293638    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:05:35.293645    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:35.304805    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:05:35.304817    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:05:35.304845    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:05:35.304849    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:05:35.304861    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:05:35.304866    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:05:35.304868    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
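The two kubelet problems flagged above are node-authorizer denials: under the Node authorization mode, a kubelet may only read ConfigMaps referenced by pods already bound to its node, so "no relationship found between node ... and this object" typically means no pod consuming the coredns ConfigMap was bound to running-upgrade-359000 at the time of the watch. A hedged client-go sketch for confirming the ConfigMap exists and is readable with admin credentials (package paths are for a recent client-go release; treat them as assumptions):

    // Sketch: fetch kube-system/coredns with admin credentials to confirm the
    // object exists and the kubelet failure is authorization, not absence.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: the admin kubeconfig path used throughout the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("coredns ConfigMap has %d data keys\n", len(cm.Data))
    }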
	I0803 18:05:33.761978    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:38.764292    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:38.764486    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:38.786516    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:38.786634    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:38.801631    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:38.801710    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:38.818728    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:38.818799    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:38.829400    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:38.829480    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:38.844102    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:38.844176    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:38.854285    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:38.854355    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:38.864451    4630 logs.go:276] 0 containers: []
	W0803 18:05:38.864464    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:38.864521    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:38.875095    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:38.875116    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:38.875120    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:38.914219    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:38.914225    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:38.928109    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:38.928119    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:38.939989    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:38.939999    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:38.965474    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:38.965484    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:38.977578    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:38.977589    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:38.981617    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:38.981623    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:39.018752    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:39.018765    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:39.033478    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:39.033488    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:39.048155    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:39.048166    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:39.065802    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:39.065813    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:39.077593    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:39.077603    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:39.088517    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:39.088527    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:39.123004    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:39.123015    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:39.141475    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:39.141487    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:39.153556    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:39.153567    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:39.167956    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:39.167969    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:41.680972    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:45.308403    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:46.683455    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:46.683620    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:46.699560    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:46.699649    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:46.711683    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:46.711759    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:46.722572    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:46.722638    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:46.733707    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:46.733773    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:46.744528    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:46.744599    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:46.756236    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:46.756305    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:46.766923    4630 logs.go:276] 0 containers: []
	W0803 18:05:46.766936    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:46.766998    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:46.778023    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:46.778043    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:46.778049    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:46.792744    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:46.792754    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:46.806444    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:46.806454    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:46.819919    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:46.819929    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:46.832828    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:46.832843    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:46.845282    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:46.845292    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:46.889061    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:46.889075    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:46.927456    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:46.927474    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:46.945151    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:46.945163    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:46.959899    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:46.959912    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:46.975939    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:46.975954    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:46.989298    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:46.989309    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:47.029862    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:47.029874    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:47.041313    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:47.041325    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:47.059547    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:47.059557    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:47.064300    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:47.064307    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:47.088081    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:47.088090    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
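
Each "Gathering logs for <component> [<id>] ..." pair above follows the same recipe: take a container ID discovered in the previous step and pull its last 400 log lines. A minimal local sketch of that loop follows; exec.Command stands in for minikube's SSH-based ssh_runner, an assumption made only to keep the sketch self-contained.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs pulls the last 400 log lines for each container ID,
    // mirroring the `docker logs --tail 400 <id>` commands in the log.
    func gatherLogs(component string, ids []string) {
        for _, id := range ids {
            fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
            out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                fmt.Printf("  docker logs failed: %v\n", err)
                continue
            }
            fmt.Print(string(out))
        }
    }

    func main() {
        // IDs as reported by the discovery step, e.g. "2 containers: [...]".
        gatherLogs("etcd", []string{"09bbda970489", "eaff7d840b96"})
    }
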
	I0803 18:05:50.309775    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:50.309959    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:50.321754    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:05:50.321834    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:50.332359    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:05:50.332422    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:50.342609    4477 logs.go:276] 2 containers: [bcbb40889ca3 3d1437e6d6fc]
	I0803 18:05:50.342667    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:50.352809    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:05:50.352869    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:50.363710    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:05:50.363770    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:50.374460    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:05:50.374537    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:50.384303    4477 logs.go:276] 0 containers: []
	W0803 18:05:50.384315    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:50.384373    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:50.394712    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:05:50.394730    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:50.394737    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:05:50.414385    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:05:50.414477    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:05:50.430211    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:50.430218    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:50.464933    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:05:50.464945    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:05:50.477002    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:05:50.477013    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:05:50.499417    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:05:50.499429    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:50.511797    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:05:50.511807    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:05:50.529285    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:05:50.529294    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:05:50.540889    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:50.540897    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:50.566154    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:50.566163    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:50.570781    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:05:50.570790    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:05:50.585445    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:05:50.585456    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:05:50.599506    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:05:50.599520    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:05:50.611746    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:05:50.611758    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:05:50.624234    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:05:50.624248    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:05:50.624273    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:05:50.624277    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:05:50.624280    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:05:50.624285    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:05:50.624288    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
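
The two flagged kubelet lines are a Node-authorizer denial: the kubelet authenticates as system:node:running-upgrade-359000, and the Node authorizer only grants a kubelet access to ConfigMaps referenced by pods already bound to its node, hence "no relationship found between node ... and this object". During a binary upgrade that relationship graph can briefly lag, so the denial is usually transient and is not what keeps /healthz from answering. The "Found kubelet problem" tagging itself is a pattern scan over the journal output; a rough sketch, with an illustrative pattern list rather than minikube's actual table:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // scanKubeletLog flags journal lines that look like known problems.
    func scanKubeletLog(sc *bufio.Scanner) []string {
        var problems []string
        for sc.Scan() {
            line := sc.Text()
            if strings.Contains(line, "forbidden") || strings.Contains(line, "Failed to watch") {
                problems = append(problems, line)
            }
        }
        return problems
    }

    func main() {
        journal := "Aug 04 01:00:40 kubelet[3545]: E0804 ... configmaps \"coredns\" is forbidden ..."
        for _, p := range scanKubeletLog(bufio.NewScanner(strings.NewReader(journal))) {
            fmt.Println("Found kubelet problem:", p)
        }
    }
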
	I0803 18:05:49.601442    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:54.603692    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:54.603912    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:54.623153    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:54.623251    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:54.637517    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:54.637601    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:54.651727    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:54.651805    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:54.662833    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:54.662909    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:54.673388    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:54.673455    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:54.684024    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:54.684093    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:54.694946    4630 logs.go:276] 0 containers: []
	W0803 18:05:54.694958    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:54.695018    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:54.705633    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:54.705654    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:54.705660    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:54.744973    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:54.744980    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:54.781718    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:54.781734    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:54.796955    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:54.796969    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:54.808912    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:54.808924    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:54.833325    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:54.833335    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:54.845157    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:54.845172    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:54.857688    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:54.857700    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:54.869451    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:54.869461    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:54.873515    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:54.873525    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:54.887228    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:54.887237    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:54.901863    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:54.901873    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:54.916282    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:54.916293    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:54.933655    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:54.933665    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:54.972013    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:54.972023    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:54.989125    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:54.989138    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:55.000783    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:55.000798    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
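
The probe/stop pairs bracketing each gathering cycle are plain HTTP health checks with a short client timeout; "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" is net/http's wording when no response arrives in time. A minimal sketch of one probe, with certificate verification skipped purely to keep it self-contained (minikube verifies against the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz GETs the apiserver health endpoint with a hard timeout.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second, // illustrative; matches the ~5s probe spacing above
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. context deadline exceeded while awaiting headers
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }
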
	I0803 18:05:57.514510    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:00.627831    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:02.516696    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:02.516818    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:02.528864    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:02.528943    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:02.539712    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:02.539790    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:02.550872    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:02.550942    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:02.561767    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:02.561838    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:02.572235    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:02.572300    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:02.583015    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:02.583086    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:02.593246    4630 logs.go:276] 0 containers: []
	W0803 18:06:02.593257    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:02.593311    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:02.603919    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:02.603936    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:02.603941    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:02.618279    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:02.618291    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:02.632833    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:02.632842    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:02.651678    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:02.651695    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:02.664089    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:02.664102    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:02.676465    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:02.676479    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:02.687954    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:02.687969    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:02.692690    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:02.692699    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:02.735163    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:02.735172    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:02.749961    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:02.749975    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:02.761397    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:02.761413    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:02.787178    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:02.787185    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:02.826413    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:02.826420    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:02.861281    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:02.861295    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:02.875611    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:02.875626    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:02.889530    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:02.889540    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:02.901005    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:02.901015    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:05.630101    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:05.630529    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:05.666736    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:06:05.666875    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:05.688075    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:06:05.688161    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:05.703102    4477 logs.go:276] 2 containers: [bcbb40889ca3 3d1437e6d6fc]
	I0803 18:06:05.703182    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:05.715212    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:06:05.715291    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:05.726041    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:06:05.726111    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:05.736942    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:06:05.737012    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:05.747548    4477 logs.go:276] 0 containers: []
	W0803 18:06:05.747557    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:05.747620    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:05.762483    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:06:05.762498    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:05.762504    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:05.767422    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:06:05.767429    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:06:05.781644    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:06:05.781657    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:06:05.415716    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:05.793295    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:06:05.793306    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:05.805185    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:06:05.805199    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:06:05.817117    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:05.817127    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:06:05.836888    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:05.836985    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:05.852607    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:05.852614    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:05.887335    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:06:05.887348    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:06:05.901816    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:06:05.901831    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:06:05.913937    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:06:05.913951    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:06:05.929194    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:06:05.929204    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:06:05.940735    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:06:05.940744    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:06:05.958921    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:05.958930    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:05.982730    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:05.982742    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:06:05.982767    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:06:05.982771    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:05.982775    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:05.982781    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:05.982784    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
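
Every cycle re-discovers containers with one docker ps query per component. The name filter keys off the k8s_<component> prefix that Docker-backed kubelets give pod containers, and --format={{.ID}} strips the output down to bare IDs, which is what the "N containers: [...]" lines report. A sketch of that step:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) for a component.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
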
	I0803 18:06:10.417971    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:10.418142    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:10.434850    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:10.434928    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:10.446032    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:10.446100    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:10.456479    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:10.456550    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:10.467318    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:10.467391    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:10.477888    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:10.477953    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:10.489377    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:10.489450    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:10.500380    4630 logs.go:276] 0 containers: []
	W0803 18:06:10.500392    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:10.500448    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:10.510780    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:10.510797    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:10.510803    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:10.515953    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:10.515959    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:10.551717    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:10.551728    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:10.566107    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:10.566120    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:10.604175    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:10.604187    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:10.616658    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:10.616670    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:10.654316    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:10.654327    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:10.673033    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:10.673043    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:10.684905    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:10.684921    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:10.696347    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:10.696359    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:10.708106    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:10.708116    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:10.722669    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:10.722681    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:10.736996    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:10.737006    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:10.761261    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:10.761271    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:10.773798    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:10.773808    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:10.797058    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:10.797066    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:10.812423    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:10.812436    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
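
The "container status" command is written to survive either runtime layout: the backquoted `which crictl || echo crictl` substitutes a bare "crictl" even when the binary is absent, so the outer command still parses and its failure trips the || fallback to docker ps -a. Reproduced from Go as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when installed; otherwise fall back to docker.
        cmd := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }
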
	I0803 18:06:13.331571    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:15.986303    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:18.333700    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:18.333836    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:18.345536    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:18.345610    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:18.356252    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:18.356330    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:18.366638    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:18.366710    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:18.377143    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:18.377213    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:18.387546    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:18.387620    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:18.397967    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:18.398037    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:18.408462    4630 logs.go:276] 0 containers: []
	W0803 18:06:18.408472    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:18.408525    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:18.418788    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:18.418806    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:18.418810    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:18.456011    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:18.456022    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:18.470295    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:18.470308    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:18.487293    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:18.487303    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:18.524623    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:18.524633    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:18.528815    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:18.528822    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:18.543565    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:18.543574    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:18.563527    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:18.563539    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:18.575243    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:18.575253    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:18.590407    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:18.590420    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:18.603956    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:18.603969    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:18.644415    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:18.644425    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:18.658056    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:18.658069    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:18.670092    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:18.670104    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:18.694138    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:18.694150    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:18.708255    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:18.708265    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:18.721388    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:18.721400    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:21.236365    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:20.988823    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:20.989057    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:21.014090    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:06:21.014207    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:21.031843    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:06:21.031922    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:21.044597    4477 logs.go:276] 2 containers: [bcbb40889ca3 3d1437e6d6fc]
	I0803 18:06:21.044672    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:21.055736    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:06:21.055804    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:21.066314    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:06:21.066382    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:21.076508    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:06:21.076570    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:21.086654    4477 logs.go:276] 0 containers: []
	W0803 18:06:21.086673    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:21.086731    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:21.096804    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:06:21.096821    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:06:21.096826    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:06:21.111236    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:06:21.111246    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:06:21.125271    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:06:21.125282    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:06:21.136748    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:06:21.136760    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:06:21.153138    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:06:21.153155    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:06:21.166437    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:06:21.166450    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:06:21.192954    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:21.192965    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:06:21.211506    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:21.211605    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:21.227425    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:21.227431    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:21.262436    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:06:21.262447    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:06:21.274883    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:06:21.274893    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:06:21.287260    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:21.287272    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:21.312181    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:06:21.312191    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:21.323317    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:21.323328    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:21.327597    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:21.327606    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:06:21.327630    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:06:21.327634    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:21.327638    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:21.327642    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:21.327645    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
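
Taken together, the repetition above is an outer wait loop: probe /healthz, and on every failure re-enumerate containers and re-dump their logs, until an overall deadline gives up. A structural sketch only; the durations and callback shape here are illustrative, not minikube's actual API:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForAPIServer retries probe until it succeeds or deadline passes,
    // invoking gather after each failure so the report stays diagnosable.
    func waitForAPIServer(deadline time.Duration, probe func() error, gather func()) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if err := probe(); err == nil {
                return nil
            }
            gather()
            time.Sleep(3 * time.Second)
        }
        return errors.New("apiserver never reported healthy before the deadline")
    }

    func main() {
        err := waitForAPIServer(30*time.Second,
            func() error { return errors.New("context deadline exceeded") },
            func() { fmt.Println("Gathering logs ...") })
        fmt.Println(err)
    }
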
	I0803 18:06:26.238438    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:26.238589    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:26.252605    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:26.252678    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:26.263993    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:26.264060    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:26.277942    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:26.278006    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:26.289310    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:26.289375    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:26.299892    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:26.299957    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:26.310708    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:26.310769    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:26.320948    4630 logs.go:276] 0 containers: []
	W0803 18:06:26.320958    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:26.321013    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:26.331366    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:26.331383    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:26.331388    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:26.345864    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:26.345875    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:26.357593    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:26.357603    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:26.375558    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:26.375570    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:26.388617    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:26.388628    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:26.401943    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:26.401954    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:26.415609    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:26.415620    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:26.426973    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:26.426985    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:26.438966    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:26.438976    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:26.445076    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:26.445082    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:26.456046    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:26.456057    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:26.471167    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:26.471180    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:26.483554    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:26.483568    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:26.498968    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:26.498981    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:26.533879    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:26.533890    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:26.572560    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:26.572574    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:26.595754    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:26.595761    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
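
Two host-level sources round out each cycle: the kernel ring buffer, trimmed to warnings and worse, and the docker/cri-docker journal. Per util-linux dmesg, -P disables the pager, -H prints human-readable timestamps, -L=never drops color, and --level filters by severity; the tail -n 400 cap matches the --tail 400 used for container logs. A sketch that runs both commands verbatim:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a shell pipeline and prints whatever it produced;
    // errors are folded into the output since this is diagnostic capture.
    func run(cmdline string) {
        out, _ := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
        fmt.Print(string(out))
    }

    func main() {
        run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        run("sudo journalctl -u docker -u cri-docker -n 400")
    }
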
	I0803 18:06:29.135290    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:31.331552    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:34.137367    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:34.137575    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:34.158817    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:34.158925    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:34.175041    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:34.175124    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:34.188614    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:34.188687    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:34.199545    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:34.199619    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:34.210272    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:34.210341    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:34.220619    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:34.220681    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:34.230763    4630 logs.go:276] 0 containers: []
	W0803 18:06:34.230777    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:34.230836    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:34.249198    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:34.249220    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:34.249226    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:34.261348    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:34.261360    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:34.281589    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:34.281600    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:34.306734    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:34.306749    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:34.321817    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:34.321828    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:34.356929    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:34.356944    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:34.370737    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:34.370749    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:34.382831    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:34.382845    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:34.394844    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:34.394855    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:34.434772    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:34.434782    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:34.438939    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:34.438947    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:34.453134    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:34.453144    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:34.464266    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:34.464278    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:34.500916    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:34.500926    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:34.515498    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:34.515507    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:34.533678    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:34.533692    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:34.554691    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:34.554705    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:37.079788    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:36.333705    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:36.333817    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:36.345236    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:06:36.345299    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:36.355983    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:06:36.356057    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:36.366719    4477 logs.go:276] 2 containers: [bcbb40889ca3 3d1437e6d6fc]
	I0803 18:06:36.366792    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:36.377147    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:06:36.377220    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:36.387215    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:06:36.387278    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:36.397881    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:06:36.397948    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:36.407766    4477 logs.go:276] 0 containers: []
	W0803 18:06:36.407780    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:36.407835    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:36.418507    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:06:36.418527    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:36.418532    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:06:36.436954    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:36.437048    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:36.452853    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:36.452862    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:36.491706    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:06:36.491717    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:06:36.503239    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:06:36.503252    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:06:36.522289    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:06:36.522302    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:06:36.539745    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:36.539756    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:36.564557    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:36.564567    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:36.568869    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:06:36.568879    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:06:36.583416    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:06:36.583425    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:06:36.597508    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:06:36.597519    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:06:36.612953    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:06:36.612969    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:06:36.624544    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:06:36.624556    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:06:36.636423    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:06:36.636442    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:36.647906    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:36.647917    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:06:36.647946    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:06:36.647950    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:36.647954    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:36.647957    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:36.647961    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:06:42.081916    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:42.082164    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:42.102991    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:42.103086    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:42.122623    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:42.122701    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:42.134466    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:42.134532    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:42.146692    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:42.146768    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:42.157402    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:42.157467    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:42.167973    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:42.168049    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:42.181194    4630 logs.go:276] 0 containers: []
	W0803 18:06:42.181208    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:42.181267    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:42.197112    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:42.197133    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:42.197140    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:42.216668    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:42.216683    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:42.231448    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:42.231460    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:42.243316    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:42.243329    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:42.279660    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:42.279672    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:42.317905    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:42.317920    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:42.330923    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:42.330936    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:42.354254    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:42.354261    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:42.392148    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:42.392157    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:42.406269    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:42.406281    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:42.436501    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:42.436511    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:42.456787    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:42.456800    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:42.474522    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:42.474532    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:42.478588    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:42.478594    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:42.492659    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:42.492671    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:42.503849    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:42.503858    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:42.521699    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:42.521712    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
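The cycle above repeats a fixed pattern: each control-plane component's container is located with a filtered `docker ps`, and the last 400 lines of each match are then tailed. A minimal local sketch of that pattern follows; it shells out directly rather than through minikube's ssh_runner, and the helper names findContainers/tailLogs are illustrative, not minikube's actual API:

```go
// Hypothetical sketch of the container-discovery pattern visible above.
// minikube's logs.go finds each k8s_<component> container with a filtered
// `docker ps`, then tails the last 400 lines of every match.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers returns the IDs of all containers (running or exited)
// whose name matches k8s_<component>, mirroring:
//   docker ps -a --filter=name=k8s_etcd --format={{.ID}}
func findContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors: docker logs --tail 400 <id>
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).Output()
	return string(out), err
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := findContainers(c)
		if err != nil {
			fmt.Println("lookup failed:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
```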
	I0803 18:06:45.036735    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:46.651919    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:50.039195    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:50.039366    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:50.056935    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:50.057035    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:50.070301    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:50.070370    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:50.085157    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:50.085225    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:50.095366    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:50.095436    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:50.105878    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:50.105944    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:50.116778    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:50.116840    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:50.126633    4630 logs.go:276] 0 containers: []
	W0803 18:06:50.126642    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:50.126693    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:50.137562    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:50.137580    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:50.137586    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:50.151989    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:50.151999    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:50.163907    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:50.163920    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:50.175106    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:50.175119    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:50.189044    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:50.189055    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:50.224355    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:50.224370    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:50.263442    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:50.263454    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:50.277927    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:50.277938    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:50.289910    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:50.289920    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:50.301937    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:50.301946    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:50.341700    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:50.341712    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:50.359477    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:50.359487    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:50.379569    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:50.379581    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:50.397290    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:50.397303    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:50.420170    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:50.420177    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:50.424166    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:50.424173    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:50.438156    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:50.438166    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:52.954709    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:51.652162    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:51.652338    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:51.666559    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:06:51.666644    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:51.678520    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:06:51.678594    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:51.689499    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:06:51.689577    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:51.699992    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:06:51.700061    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:51.716118    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:06:51.716189    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:51.726870    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:06:51.726944    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:51.737554    4477 logs.go:276] 0 containers: []
	W0803 18:06:51.737565    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:51.737622    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:51.748494    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:06:51.748509    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:51.748514    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:51.783595    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:06:51.783610    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:06:51.794802    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:06:51.794816    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:06:51.821594    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:51.821606    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:06:51.839063    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:51.839154    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:51.854720    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:06:51.854725    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:06:51.868020    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:06:51.868029    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:06:51.880030    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:06:51.880041    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:51.891678    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:51.891690    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:51.896032    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:06:51.896039    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:06:51.910582    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:06:51.910593    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:06:51.922154    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:06:51.922167    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:06:51.933996    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:06:51.934006    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:06:51.948007    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:51.948018    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:51.971776    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:06:51.971785    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:06:51.985408    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:06:51.985418    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:06:52.000427    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:52.000438    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:06:52.000466    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:06:52.000470    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:06:52.000474    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:06:52.000502    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:06:52.000506    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
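The api_server.go lines interleaved through this log poll https://10.0.2.15:8443/healthz and report "stopped" when the request times out, then fall back to log gathering before the next attempt. A minimal sketch of such a probe, assuming a roughly 5-second client timeout and a self-signed in-VM certificate (both inferred from the log spacing and environment, not confirmed from source):

```go
// Minimal sketch, not minikube's actual code: GET /healthz with a short
// client timeout; a timeout surfaces as the "context deadline exceeded
// (Client.Timeout exceeded while awaiting headers)" error seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption: matches the ~5s gaps between checks
		Transport: &http.Transport{
			// Assumption: the in-VM apiserver cert is not trusted by the host.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. the timeout errors logged above
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	for i := 0; i < 3; i++ {
		if err := checkHealthz(url); err != nil {
			fmt.Println("stopped:", err)
			// ... gather container logs here, then retry ...
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```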
	I0803 18:06:57.956947    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:57.957102    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:57.972774    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:57.972864    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:57.985737    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:57.985814    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:57.996911    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:57.996976    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:58.007521    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:58.007588    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:58.018439    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:58.018502    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:58.029769    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:58.029844    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:58.041325    4630 logs.go:276] 0 containers: []
	W0803 18:06:58.041336    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:58.041405    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:58.051946    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:58.051961    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:58.051967    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:58.063327    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:58.063338    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:58.074822    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:58.074834    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:58.086610    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:58.086621    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:58.103893    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:58.103904    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:58.116243    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:58.116257    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:58.120398    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:58.120404    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:58.138709    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:58.138721    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:58.150644    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:58.150659    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:58.164212    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:58.164224    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:58.185188    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:58.185198    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:58.220150    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:58.220161    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:58.258082    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:58.258093    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:58.272668    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:58.272678    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:58.285446    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:58.285457    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:58.322498    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:58.322507    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:58.336590    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:58.336602    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:00.862486    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:02.003067    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:05.864643    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:05.864831    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:05.879854    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:05.879938    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:05.891698    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:05.891769    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:05.902247    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:05.902318    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:05.916681    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:05.916789    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:05.927245    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:05.927308    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:05.938362    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:05.938435    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:05.949854    4630 logs.go:276] 0 containers: []
	W0803 18:07:05.949866    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:05.949925    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:05.960453    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:05.960471    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:05.960476    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:05.964607    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:05.964613    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:05.979743    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:05.979760    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:05.994470    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:05.994483    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:06.009978    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:06.009995    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:06.021601    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:06.021615    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:06.036173    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:06.036184    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:06.071045    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:06.071056    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:06.085173    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:06.085184    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:06.098728    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:06.098742    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:06.110756    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:06.110767    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:06.129652    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:06.129662    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:06.169463    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:06.169473    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:06.185088    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:06.185098    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:06.223238    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:06.223247    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:06.235459    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:06.235471    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:06.257721    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:06.257730    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:07.005261    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:07.005425    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:07.023725    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:07:07.023810    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:07.037023    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:07:07.037097    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:07.049375    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:07:07.049442    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:07.059936    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:07:07.059996    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:07.070409    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:07:07.070480    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:07.081317    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:07:07.081380    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:07.098160    4477 logs.go:276] 0 containers: []
	W0803 18:07:07.098172    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:07.098233    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:07.108577    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:07:07.108595    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:07.108602    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:07.113521    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:07.113528    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:07.148817    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:07:07.148828    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:07:07.163312    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:07:07.163325    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:07:07.175544    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:07:07.175557    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:07:07.186949    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:07:07.186958    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:07:07.201917    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:07.201927    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:07.227975    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:07.227983    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:07:07.247190    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:07.247281    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:07.263062    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:07:07.263072    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:07:07.275473    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:07:07.275484    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:07:07.286953    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:07:07.286964    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:07:07.300763    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:07:07.300773    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:07:07.312241    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:07:07.312253    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:07:07.331858    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:07:07.331869    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:07:07.343319    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:07:07.343330    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:07.355615    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:07.355629    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:07:07.355655    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:07:07.355660    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:07.355664    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:07.355668    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:07.355671    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
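The recurring "Found kubelet problem" warnings come from scanning the `journalctl -u kubelet -n 400` output for suspicious entries (the logs.go:138 call site above). A rough sketch of that scan; the matching heuristic below is a guess for illustration only, not minikube's actual rule set:

```go
// Hedged sketch of the kubelet-problem scan: read the last 400 journal
// lines for the kubelet unit and surface klog warning/error entries.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		// klog prefixes warnings/errors with W/E plus the date, so a journal
		// line looks like "... kubelet[3545]: W0804 01:00:40... reflector.go..."
		if strings.Contains(line, ": W0") || strings.Contains(line, ": E0") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
}
```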
	I0803 18:07:08.771501    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:13.773618    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:13.773837    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:13.791657    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:13.791753    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:13.805625    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:13.805701    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:13.817620    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:13.817688    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:13.828602    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:13.828673    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:13.839133    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:13.839207    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:13.849875    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:13.849945    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:13.860488    4630 logs.go:276] 0 containers: []
	W0803 18:07:13.860501    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:13.860555    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:13.870902    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:13.870919    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:13.870925    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:13.885722    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:13.885736    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:13.901584    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:13.901600    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:13.914115    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:13.914126    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:13.918386    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:13.918394    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:13.958268    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:13.958278    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:13.970348    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:13.970358    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:13.989582    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:13.989592    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:14.003725    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:14.003736    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:14.015002    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:14.015013    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:14.037263    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:14.037271    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:14.049420    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:14.049431    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:14.063995    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:14.064007    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:14.078130    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:14.078146    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:14.093105    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:14.093116    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:14.130666    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:14.130674    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:14.166513    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:14.166525    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:16.684759    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:17.358423    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:21.686858    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:21.686972    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:21.703868    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:21.703941    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:21.714107    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:21.714179    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:21.729457    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:21.729527    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:21.740671    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:21.740737    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:21.751353    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:21.751424    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:21.761597    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:21.761673    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:21.772344    4630 logs.go:276] 0 containers: []
	W0803 18:07:21.772356    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:21.772414    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:21.783250    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:21.783269    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:21.783276    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:21.797200    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:21.797212    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:21.810782    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:21.810791    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:21.828686    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:21.828696    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:21.844825    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:21.844838    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:21.867669    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:21.867679    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:21.902580    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:21.902592    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:21.916782    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:21.916792    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:21.930060    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:21.930070    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:21.941416    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:21.941428    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:21.946404    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:21.946412    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:21.984050    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:21.984060    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:21.998031    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:21.998041    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:22.013094    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:22.013104    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:22.029869    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:22.029885    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:22.069356    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:22.069366    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:22.080955    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:22.080969    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:22.360593    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:22.360708    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:22.371698    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:07:22.371772    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:22.381930    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:07:22.382003    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:22.393164    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:07:22.393233    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:22.403626    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:07:22.403700    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:22.413910    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:07:22.413980    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:22.431210    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:07:22.431276    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:22.441378    4477 logs.go:276] 0 containers: []
	W0803 18:07:22.441395    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:22.441453    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:22.451787    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:07:22.451804    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:22.451810    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:22.456720    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:07:22.456726    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:07:22.470428    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:07:22.470442    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:07:22.485662    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:07:22.485678    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:07:22.504706    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:07:22.504717    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:22.517107    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:07:22.517118    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:07:22.529206    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:07:22.529221    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:07:22.542630    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:07:22.542643    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:07:22.554681    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:07:22.554691    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:07:22.566725    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:22.566735    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:22.591152    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:22.591167    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:07:22.609323    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:22.609420    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:22.625192    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:22.625198    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:22.662162    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:07:22.662175    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:07:22.681241    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:07:22.681260    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:07:22.700273    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:07:22.700287    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:07:22.715589    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:22.715603    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:07:22.715636    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:07:22.715643    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:22.715647    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:22.715651    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:22.715654    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
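
The kubelet problems flagged above are Node-authorizer denials: under node authorization, a kubelet identity (here system:node:running-upgrade-359000) may only read a ConfigMap once the API server sees a relationship between the node and that object, and none exists yet for "coredns". A way to reproduce the denial by hand from inside the guest, reusing the binary path shown in these logs and assuming the standard kubeadm kubeconfig location /etc/kubernetes/kubelet.conf (a sketch, not a command this test run executes):

    # Ask the API server whether this node's kubelet identity may list
    # ConfigMaps in kube-system; "no" matches the denial logged above.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/etc/kubernetes/kubelet.conf \
      auth can-i list configmaps --namespace=kube-system
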
	I0803 18:07:24.596015    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:29.598270    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:29.598542    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:29.624614    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:29.624741    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:29.652036    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:29.652112    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:29.663773    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:29.663843    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:29.674239    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:29.674301    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:29.684829    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:29.684900    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:29.695404    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:29.695474    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:29.705675    4630 logs.go:276] 0 containers: []
	W0803 18:07:29.705686    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:29.705744    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:29.716293    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:29.716308    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:29.716314    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:29.753799    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:29.753806    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:29.765597    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:29.765607    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:29.777807    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:29.777821    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:29.792328    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:29.792341    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:29.805448    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:29.805458    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:29.817311    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:29.817322    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:29.831831    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:29.831840    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:29.849358    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:29.849368    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:29.871626    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:29.871632    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:29.906573    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:29.906583    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:29.946492    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:29.946505    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:29.960639    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:29.960650    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:29.971814    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:29.971828    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:29.983474    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:29.983485    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:29.987414    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:29.987422    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:30.002624    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:30.002634    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:32.518901    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:32.719471    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:37.521215    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
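
Each "Checking apiserver healthz" / "stopped:" pair above spans roughly five seconds: the HTTP client's timeout elapses before the apiserver returns headers, so the check is recorded as stopped and the log-gathering cycle repeats. The equivalent probe by hand against the same guest address (a sketch for reproducing the failure, not part of the test run):

    # Probe the apiserver health endpoint with a 5s budget; -k skips TLS
    # verification since the cluster CA is not in the host trust store.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
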
	I0803 18:07:37.521561    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:37.553926    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:37.554060    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:37.573936    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:37.574037    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:37.587842    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:37.587939    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:37.600149    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:37.600224    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:37.610989    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:37.611063    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:37.625265    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:37.625339    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:37.640317    4630 logs.go:276] 0 containers: []
	W0803 18:07:37.640328    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:37.640388    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:37.654153    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:37.654172    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:37.654178    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:37.693851    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:37.693861    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:37.734264    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:37.734284    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:37.749488    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:37.749501    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:37.765788    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:37.765801    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:37.809446    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:37.809460    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:37.822585    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:37.822596    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:37.834503    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:37.834515    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:37.848039    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:37.848050    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:37.862460    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:37.862470    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:37.877214    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:37.877225    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:37.892657    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:37.892668    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:37.905718    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:37.905731    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:37.911122    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:37.911129    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:37.926147    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:37.926161    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:37.944960    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:37.944975    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:37.961778    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:37.961788    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:37.721527    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:37.721654    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:37.737700    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:07:37.737773    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:37.754450    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:07:37.754528    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:37.770166    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:07:37.770242    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:37.786001    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:07:37.786082    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:37.796768    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:07:37.796831    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:37.807787    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:07:37.807854    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:37.818808    4477 logs.go:276] 0 containers: []
	W0803 18:07:37.818819    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:37.818880    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:37.830214    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:07:37.830233    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:07:37.830237    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:07:37.845045    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:07:37.845059    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:07:37.857813    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:07:37.857826    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:07:37.872549    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:37.872562    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:07:37.895090    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:37.895185    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:37.911589    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:07:37.911603    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:07:37.924320    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:07:37.924331    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:07:37.937597    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:37.937609    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:37.962958    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:07:37.962967    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:37.975554    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:37.975570    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:37.980209    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:37.980216    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:38.016074    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:07:38.016087    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:07:38.031750    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:07:38.031762    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:07:38.048129    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:07:38.048143    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:07:38.062988    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:07:38.063000    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:07:38.082428    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:07:38.082439    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:07:38.094773    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:38.094785    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:07:38.094813    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:07:38.094817    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:38.094844    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:38.094848    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:38.094851    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:07:40.487321    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:45.489573    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:45.489789    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:45.505719    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:45.505794    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:45.518464    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:45.518537    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:45.529281    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:45.529349    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:45.539865    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:45.539935    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:45.550761    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:45.550833    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:45.568170    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:45.568239    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:45.579486    4630 logs.go:276] 0 containers: []
	W0803 18:07:45.579495    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:45.579551    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:45.590487    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:45.590506    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:45.590512    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:45.625712    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:45.625724    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:45.640291    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:45.640302    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:45.679200    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:45.679211    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:45.695536    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:45.695546    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:45.723670    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:45.723686    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:45.750428    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:45.750439    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:45.788319    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:45.788327    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:45.792521    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:45.792531    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:45.805067    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:45.805077    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:45.818949    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:45.818959    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:45.830478    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:45.830489    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:45.848464    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:45.848475    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:45.860500    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:45.860509    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:45.871859    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:45.871870    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:45.882537    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:45.882548    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:45.904912    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:45.904920    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
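
The "container status" command above packs a two-level fallback into one line via backticks. Unrolled for readability (same logic as logged, just restructured as a sketch):

    # Prefer crictl if it resolves on PATH; otherwise keep the bare name so the
    # sudo invocation fails fast. If the crictl listing fails, fall back to docker.
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a
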
	I0803 18:07:48.098723    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:48.418774    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:53.100941    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:53.101103    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:53.112192    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:07:53.112277    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:53.123503    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:07:53.123575    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:53.134104    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:07:53.134180    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:53.145742    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:07:53.145810    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:53.156029    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:07:53.156103    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:53.166398    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:07:53.166461    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:53.176485    4477 logs.go:276] 0 containers: []
	W0803 18:07:53.176497    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:53.176559    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:53.187269    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:07:53.187285    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:53.187292    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:53.210661    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:07:53.210667    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:07:53.222178    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:07:53.222190    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:07:53.238222    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:07:53.238233    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:07:53.252491    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:07:53.252501    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:53.264486    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:07:53.264497    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:07:53.276442    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:07:53.276454    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:07:53.287685    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:07:53.287697    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:07:53.302786    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:07:53.302799    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:07:53.314438    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:53.314451    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:07:53.332068    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:53.332159    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:53.347787    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:53.347794    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:53.382604    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:07:53.382614    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:07:53.394996    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:07:53.395010    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:07:53.418248    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:53.418263    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:53.423398    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:07:53.423407    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:07:53.442780    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:53.442792    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:07:53.442819    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:07:53.442825    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:07:53.442830    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:07:53.442842    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:07:53.442845    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:07:53.419591    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:53.419719    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:53.431250    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:53.431322    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:53.443841    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:53.443904    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:53.457468    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:53.457540    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:53.468260    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:53.468331    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:53.479146    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:53.479211    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:53.489871    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:53.489950    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:53.500759    4630 logs.go:276] 0 containers: []
	W0803 18:07:53.500769    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:53.500830    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:53.511413    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:53.511433    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:53.511441    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:53.550011    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:53.550021    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:53.562208    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:53.562219    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:53.577711    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:53.577724    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:53.582217    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:53.582224    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:53.596779    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:53.596789    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:53.618341    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:53.618348    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:53.654730    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:53.654741    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:53.666648    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:53.666658    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:53.680588    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:53.680598    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:53.695272    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:53.695281    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:53.706901    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:53.706915    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:53.747002    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:53.747014    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:53.761923    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:53.761934    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:53.779792    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:53.779803    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:53.792413    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:53.792422    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:53.806035    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:53.806045    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:56.319771    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:01.322010    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:01.322086    4630 kubeadm.go:597] duration metric: took 4m4.117150917s to restartPrimaryControlPlane
	W0803 18:08:01.322180    4630 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0803 18:08:01.322218    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0803 18:08:02.338583    4630 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.016381792s)
	I0803 18:08:02.338657    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 18:08:02.343631    4630 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 18:08:02.346668    4630 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 18:08:02.349453    4630 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 18:08:02.349458    4630 kubeadm.go:157] found existing configuration files:
	
	I0803 18:08:02.349484    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/admin.conf
	I0803 18:08:02.351874    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 18:08:02.351895    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 18:08:02.354757    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/kubelet.conf
	I0803 18:08:02.357619    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 18:08:02.357639    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 18:08:02.360417    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/controller-manager.conf
	I0803 18:08:02.362925    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 18:08:02.362944    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 18:08:02.365826    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/scheduler.conf
	I0803 18:08:02.368469    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 18:08:02.368490    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
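
The four grep-then-rm exchanges above implement stale-kubeconfig cleanup: any kubeconfig that does not reference the expected control-plane endpoint is removed before kubeadm is re-run. Condensed into a loop (a sketch of the same sequence, using the endpoint from this log):

    # Remove kubeconfigs that do not point at the expected control-plane endpoint;
    # a missing file makes grep fail, which also triggers the rm (harmless here).
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50497" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done
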
	I0803 18:08:02.371112    4630 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 18:08:02.389679    4630 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0803 18:08:02.389705    4630 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 18:08:02.438313    4630 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 18:08:02.438364    4630 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 18:08:02.438407    4630 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
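
Acting on that preflight hint ahead of time looks like the following, reusing the pinned-binaries PATH pattern this run uses for its other kubeadm invocations (a sketch; the version matches the one logged):

    # Pre-pull the control-plane images so 'kubeadm init' does not block on pulls.
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm config images pull --kubernetes-version v1.24.1
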
	I0803 18:08:02.490315    4630 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 18:08:02.494549    4630 out.go:204]   - Generating certificates and keys ...
	I0803 18:08:02.494590    4630 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 18:08:02.494626    4630 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 18:08:02.494663    4630 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0803 18:08:02.494697    4630 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0803 18:08:02.494751    4630 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0803 18:08:02.494786    4630 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0803 18:08:02.494820    4630 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0803 18:08:02.494850    4630 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0803 18:08:02.494889    4630 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0803 18:08:02.494930    4630 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0803 18:08:02.494957    4630 kubeadm.go:310] [certs] Using the existing "sa" key
	I0803 18:08:02.494987    4630 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 18:08:02.539697    4630 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 18:08:02.597198    4630 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 18:08:02.733869    4630 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 18:08:02.834327    4630 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 18:08:02.866213    4630 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 18:08:02.866669    4630 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 18:08:02.866690    4630 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 18:08:02.952409    4630 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 18:08:02.955576    4630 out.go:204]   - Booting up control plane ...
	I0803 18:08:02.955625    4630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 18:08:02.955663    4630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 18:08:02.955705    4630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 18:08:02.955750    4630 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 18:08:02.955848    4630 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0803 18:08:03.445253    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:06.956623    4630 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001337 seconds
	I0803 18:08:06.956688    4630 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 18:08:06.962007    4630 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 18:08:07.471470    4630 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 18:08:07.471644    4630 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-413000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 18:08:07.974712    4630 kubeadm.go:310] [bootstrap-token] Using token: ns3qrc.zgs4s8hhalx61p06
	I0803 18:08:07.978119    4630 out.go:204]   - Configuring RBAC rules ...
	I0803 18:08:07.978190    4630 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 18:08:07.978238    4630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 18:08:07.980116    4630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 18:08:07.984535    4630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 18:08:07.985464    4630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 18:08:07.986307    4630 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 18:08:07.989269    4630 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 18:08:08.162942    4630 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 18:08:08.378344    4630 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 18:08:08.378791    4630 kubeadm.go:310] 
	I0803 18:08:08.378819    4630 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 18:08:08.378824    4630 kubeadm.go:310] 
	I0803 18:08:08.378860    4630 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 18:08:08.378865    4630 kubeadm.go:310] 
	I0803 18:08:08.378876    4630 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 18:08:08.378915    4630 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 18:08:08.378941    4630 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 18:08:08.378945    4630 kubeadm.go:310] 
	I0803 18:08:08.378973    4630 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 18:08:08.378978    4630 kubeadm.go:310] 
	I0803 18:08:08.379006    4630 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 18:08:08.379011    4630 kubeadm.go:310] 
	I0803 18:08:08.379046    4630 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 18:08:08.379087    4630 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 18:08:08.379126    4630 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 18:08:08.379131    4630 kubeadm.go:310] 
	I0803 18:08:08.379174    4630 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 18:08:08.379216    4630 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 18:08:08.379220    4630 kubeadm.go:310] 
	I0803 18:08:08.379263    4630 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ns3qrc.zgs4s8hhalx61p06 \
	I0803 18:08:08.379317    4630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8926886cd496fcdb8fb5b92a5ce19b9a5533dd397e42f479b7664c72b739cada \
	I0803 18:08:08.379330    4630 kubeadm.go:310] 	--control-plane 
	I0803 18:08:08.379334    4630 kubeadm.go:310] 
	I0803 18:08:08.379375    4630 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 18:08:08.379379    4630 kubeadm.go:310] 
	I0803 18:08:08.379426    4630 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ns3qrc.zgs4s8hhalx61p06 \
	I0803 18:08:08.379478    4630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8926886cd496fcdb8fb5b92a5ce19b9a5533dd397e42f479b7664c72b739cada 
	I0803 18:08:08.379669    4630 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
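
That warning is directly actionable; applied inside the guest it is simply:

    # Ensure kubelet starts on boot, as the preflight warning suggests.
    sudo systemctl enable kubelet.service
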
	I0803 18:08:08.379716    4630 cni.go:84] Creating CNI manager for ""
	I0803 18:08:08.379727    4630 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:08:08.384527    4630 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 18:08:08.388453    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 18:08:08.391894    4630 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
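
The 496-byte payload copied above is minikube's bridge CNI configuration. The log does not reproduce the file's contents; a representative conflist of the same shape, with illustrative field values that are assumptions rather than the actual bytes, would be:

    # Illustrative /etc/cni/net.d/1-k8s.conflist of the kind written here;
    # the real file's contents are not shown in this log.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
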
	I0803 18:08:08.396394    4630 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 18:08:08.396452    4630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 18:08:08.396456    4630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-413000 minikube.k8s.io/updated_at=2024_08_03T18_08_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=stopped-upgrade-413000 minikube.k8s.io/primary=true
	I0803 18:08:08.432248    4630 kubeadm.go:1113] duration metric: took 35.834291ms to wait for elevateKubeSystemPrivileges
	I0803 18:08:08.439824    4630 ops.go:34] apiserver oom_adj: -16
	I0803 18:08:08.439838    4630 kubeadm.go:394] duration metric: took 4m11.249546291s to StartCluster
	I0803 18:08:08.439850    4630 settings.go:142] acquiring lock: {Name:mkc455f89a0a1d96857baea22a1ca4141ab02c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:08:08.439953    4630 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:08:08.440388    4630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/kubeconfig: {Name:mk0a3c55e1982b2d92db1034b47f8334d27942c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:08:08.440586    4630 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:08:08.440687    4630 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:08:08.440644    4630 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 18:08:08.440734    4630 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-413000"
	I0803 18:08:08.440740    4630 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-413000"
	I0803 18:08:08.440748    4630 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-413000"
	W0803 18:08:08.440752    4630 addons.go:243] addon storage-provisioner should already be in state true
	I0803 18:08:08.440753    4630 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-413000"
	I0803 18:08:08.440763    4630 host.go:66] Checking if "stopped-upgrade-413000" exists ...
	I0803 18:08:08.441181    4630 retry.go:31] will retry after 745.250721ms: connect: dial unix /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/monitor: connect: connection refused
	I0803 18:08:08.444459    4630 out.go:177] * Verifying Kubernetes components...
	I0803 18:08:08.454421    4630 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:08:08.445518    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:08.445587    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:08:08.456657    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:08:08.456724    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:08:08.467476    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:08:08.467543    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:08:08.478739    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:08:08.478809    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:08:08.493487    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:08:08.493561    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:08:08.505562    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:08:08.505630    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:08:08.516989    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:08:08.517060    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:08:08.527922    4477 logs.go:276] 0 containers: []
	W0803 18:08:08.527934    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:08:08.527992    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:08:08.543035    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:08:08.543068    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:08:08.543074    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:08:08.556768    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:08:08.556780    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:08:08.573932    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:08:08.574025    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:08:08.590288    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:08:08.590304    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:08:08.595099    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:08:08.595106    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:08:08.609954    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:08:08.609965    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:08:08.625453    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:08:08.625468    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:08:08.643573    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:08:08.643587    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:08:08.657032    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:08:08.657042    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:08:08.696450    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:08:08.696462    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:08:08.708850    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:08:08.708863    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:08:08.722484    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:08:08.722498    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:08:08.734924    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:08:08.734936    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:08:08.748121    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:08:08.748132    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:08:08.760684    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:08:08.760697    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:08:08.783376    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:08:08.783391    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:08:08.816644    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:08:08.816667    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:08:08.816709    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:08:08.816714    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:08:08.816718    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:08:08.816722    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:08:08.816725    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:08:08.460398    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:08:08.464471    4630 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 18:08:08.464481    4630 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 18:08:08.464490    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0803 18:08:08.546453    4630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 18:08:08.552295    4630 api_server.go:52] waiting for apiserver process to appear ...
	I0803 18:08:08.552340    4630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:08:08.556974    4630 api_server.go:72] duration metric: took 116.379333ms to wait for apiserver process to appear ...
	I0803 18:08:08.556984    4630 api_server.go:88] waiting for apiserver healthz status ...
	I0803 18:08:08.556993    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:08.604308    4630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 18:08:09.189511    4630 kapi.go:59] client config for stopped-upgrade-413000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1019a01b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 18:08:09.189640    4630 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-413000"
	W0803 18:08:09.189647    4630 addons.go:243] addon default-storageclass should already be in state true
	I0803 18:08:09.189660    4630 host.go:66] Checking if "stopped-upgrade-413000" exists ...
	I0803 18:08:09.190391    4630 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 18:08:09.190398    4630 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 18:08:09.190405    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0803 18:08:09.225769    4630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 18:08:13.558938    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:13.558983    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:18.819504    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:18.559181    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:18.559218    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:23.821631    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:23.821806    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:08:23.839192    4477 logs.go:276] 1 containers: [61b3a63eaddc]
	I0803 18:08:23.839279    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:08:23.851930    4477 logs.go:276] 1 containers: [b561e504a901]
	I0803 18:08:23.852004    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:08:23.864049    4477 logs.go:276] 4 containers: [03f5f9344fc4 ac564f34c2d8 bcbb40889ca3 3d1437e6d6fc]
	I0803 18:08:23.864122    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:08:23.875384    4477 logs.go:276] 1 containers: [1d5103a7d136]
	I0803 18:08:23.875449    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:08:23.886121    4477 logs.go:276] 1 containers: [a422dda53c75]
	I0803 18:08:23.886189    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:08:23.897260    4477 logs.go:276] 1 containers: [01a42d8e56a8]
	I0803 18:08:23.897325    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:08:23.907520    4477 logs.go:276] 0 containers: []
	W0803 18:08:23.907530    4477 logs.go:278] No container was found matching "kindnet"
	I0803 18:08:23.907585    4477 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:08:23.917863    4477 logs.go:276] 1 containers: [3318cf46c892]
	I0803 18:08:23.917879    4477 logs.go:123] Gathering logs for container status ...
	I0803 18:08:23.917885    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:08:23.930053    4477 logs.go:123] Gathering logs for dmesg ...
	I0803 18:08:23.930066    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:08:23.935055    4477 logs.go:123] Gathering logs for etcd [b561e504a901] ...
	I0803 18:08:23.935061    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b561e504a901"
	I0803 18:08:23.949035    4477 logs.go:123] Gathering logs for coredns [03f5f9344fc4] ...
	I0803 18:08:23.949045    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03f5f9344fc4"
	I0803 18:08:23.965171    4477 logs.go:123] Gathering logs for coredns [ac564f34c2d8] ...
	I0803 18:08:23.965181    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac564f34c2d8"
	I0803 18:08:23.976970    4477 logs.go:123] Gathering logs for kube-proxy [a422dda53c75] ...
	I0803 18:08:23.976983    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a422dda53c75"
	I0803 18:08:23.989051    4477 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:08:23.989064    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:08:24.036280    4477 logs.go:123] Gathering logs for kube-apiserver [61b3a63eaddc] ...
	I0803 18:08:24.036296    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61b3a63eaddc"
	I0803 18:08:24.060243    4477 logs.go:123] Gathering logs for coredns [3d1437e6d6fc] ...
	I0803 18:08:24.060255    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1437e6d6fc"
	I0803 18:08:24.072762    4477 logs.go:123] Gathering logs for kube-controller-manager [01a42d8e56a8] ...
	I0803 18:08:24.072774    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a42d8e56a8"
	I0803 18:08:24.090431    4477 logs.go:123] Gathering logs for kubelet ...
	I0803 18:08:24.090444    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 18:08:24.108917    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:08:24.109008    4477 logs.go:138] Found kubelet problem: Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:08:24.124666    4477 logs.go:123] Gathering logs for kube-scheduler [1d5103a7d136] ...
	I0803 18:08:24.124671    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5103a7d136"
	I0803 18:08:24.139411    4477 logs.go:123] Gathering logs for coredns [bcbb40889ca3] ...
	I0803 18:08:24.139426    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcbb40889ca3"
	I0803 18:08:24.151542    4477 logs.go:123] Gathering logs for storage-provisioner [3318cf46c892] ...
	I0803 18:08:24.151555    4477 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3318cf46c892"
	I0803 18:08:24.162774    4477 logs.go:123] Gathering logs for Docker ...
	I0803 18:08:24.162787    4477 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:08:24.186806    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:08:24.186815    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 18:08:24.186838    4477 out.go:239] X Problems detected in kubelet:
	W0803 18:08:24.186842    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: W0804 01:00:40.842663    3545 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	W0803 18:08:24.186845    4477 out.go:239]   Aug 04 01:00:40 running-upgrade-359000 kubelet[3545]: E0804 01:00:40.842676    3545 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-359000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-359000' and this object
	I0803 18:08:24.186849    4477 out.go:304] Setting ErrFile to fd 2...
	I0803 18:08:24.186863    4477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:08:23.559844    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:23.559879    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:28.560287    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:28.560307    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:34.190724    4477 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:33.560861    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:33.560896    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:39.192913    4477 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:39.197259    4477 out.go:177] 
	W0803 18:08:39.200287    4477 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0803 18:08:39.200292    4477 out.go:239] * 
	W0803 18:08:39.200749    4477 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:08:39.212157    4477 out.go:177] 
	I0803 18:08:38.561687    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:38.561720    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0803 18:08:39.290324    4630 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0803 18:08:39.300857    4630 out.go:177] * Enabled addons: storage-provisioner
	I0803 18:08:39.309759    4630 addons.go:510] duration metric: took 30.870012792s for enable addons: enabled=[storage-provisioner]
	I0803 18:08:43.562723    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:43.562745    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:48.564387    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:48.564413    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Sun 2024-08-04 00:59:43 UTC, ends at Sun 2024-08-04 01:08:55 UTC. --
	Aug 04 01:08:36 running-upgrade-359000 dockerd[2855]: time="2024-08-04T01:08:36.331791989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 04 01:08:36 running-upgrade-359000 dockerd[2855]: time="2024-08-04T01:08:36.331819530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 04 01:08:36 running-upgrade-359000 dockerd[2855]: time="2024-08-04T01:08:36.331825029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 04 01:08:36 running-upgrade-359000 dockerd[2855]: time="2024-08-04T01:08:36.331933774Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2f96c15a37c90ccc58a3848826db8915efc4b57d1266c3425910bb79a65aea42 pid=15511 runtime=io.containerd.runc.v2
	Aug 04 01:08:37 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:37Z" level=error msg="ContainerStats resp: {0x40007edb00 linux}"
	Aug 04 01:08:38 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:38Z" level=error msg="ContainerStats resp: {0x400092b500 linux}"
	Aug 04 01:08:38 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:38Z" level=error msg="ContainerStats resp: {0x40008ff6c0 linux}"
	Aug 04 01:08:38 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:38Z" level=error msg="ContainerStats resp: {0x400090a8c0 linux}"
	Aug 04 01:08:38 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:38Z" level=error msg="ContainerStats resp: {0x400090af40 linux}"
	Aug 04 01:08:38 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:38Z" level=error msg="ContainerStats resp: {0x40008d6100 linux}"
	Aug 04 01:08:38 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:38Z" level=error msg="ContainerStats resp: {0x400090bc80 linux}"
	Aug 04 01:08:38 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:38Z" level=error msg="ContainerStats resp: {0x40008d6a00 linux}"
	Aug 04 01:08:40 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:40Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 04 01:08:45 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:45Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 04 01:08:48 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:48Z" level=error msg="ContainerStats resp: {0x40001158c0 linux}"
	Aug 04 01:08:48 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:48Z" level=error msg="ContainerStats resp: {0x4000550280 linux}"
	Aug 04 01:08:49 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:49Z" level=error msg="ContainerStats resp: {0x4000398a00 linux}"
	Aug 04 01:08:50 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:50Z" level=error msg="ContainerStats resp: {0x4000436a40 linux}"
	Aug 04 01:08:50 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:50Z" level=error msg="ContainerStats resp: {0x4000437340 linux}"
	Aug 04 01:08:50 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:50Z" level=error msg="ContainerStats resp: {0x4000437b80 linux}"
	Aug 04 01:08:50 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:50Z" level=error msg="ContainerStats resp: {0x40008d6100 linux}"
	Aug 04 01:08:50 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:50Z" level=error msg="ContainerStats resp: {0x40008d6240 linux}"
	Aug 04 01:08:50 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:50Z" level=error msg="ContainerStats resp: {0x40008d6580 linux}"
	Aug 04 01:08:50 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:50Z" level=error msg="ContainerStats resp: {0x40000bb340 linux}"
	Aug 04 01:08:50 running-upgrade-359000 cri-dockerd[2698]: time="2024-08-04T01:08:50Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	91e579bae410f       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   837e31721c698
	2f96c15a37c90       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   f11db3aa40008
	03f5f9344fc41       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   837e31721c698
	ac564f34c2d84       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   f11db3aa40008
	a422dda53c75e       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   c5915ee045105
	3318cf46c892a       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   4df70d9e6bd64
	1d5103a7d1369       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   defa5e86574f4
	01a42d8e56a8c       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   ceabd685ca209
	61b3a63eaddc3       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   4b8589ac691fb
	b561e504a9011       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   9abc238873002
	
	
	==> coredns [03f5f9344fc4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5783624329338375089.4810271041299495610. HINFO: read udp 10.244.0.2:45137->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5783624329338375089.4810271041299495610. HINFO: read udp 10.244.0.2:57853->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5783624329338375089.4810271041299495610. HINFO: read udp 10.244.0.2:40302->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5783624329338375089.4810271041299495610. HINFO: read udp 10.244.0.2:47160->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5783624329338375089.4810271041299495610. HINFO: read udp 10.244.0.2:54947->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5783624329338375089.4810271041299495610. HINFO: read udp 10.244.0.2:42535->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5783624329338375089.4810271041299495610. HINFO: read udp 10.244.0.2:33263->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5783624329338375089.4810271041299495610. HINFO: read udp 10.244.0.2:41478->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5783624329338375089.4810271041299495610. HINFO: read udp 10.244.0.2:57813->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5783624329338375089.4810271041299495610. HINFO: read udp 10.244.0.2:57766->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2f96c15a37c9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2846768210879176058.7046855341921186224. HINFO: read udp 10.244.0.3:43286->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2846768210879176058.7046855341921186224. HINFO: read udp 10.244.0.3:45225->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2846768210879176058.7046855341921186224. HINFO: read udp 10.244.0.3:35251->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2846768210879176058.7046855341921186224. HINFO: read udp 10.244.0.3:55026->10.0.2.3:53: i/o timeout
	
	
	==> coredns [91e579bae410] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1950955087734441318.6458498865749787712. HINFO: read udp 10.244.0.2:44215->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1950955087734441318.6458498865749787712. HINFO: read udp 10.244.0.2:47448->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1950955087734441318.6458498865749787712. HINFO: read udp 10.244.0.2:53814->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1950955087734441318.6458498865749787712. HINFO: read udp 10.244.0.2:58051->10.0.2.3:53: i/o timeout
	
	
	==> coredns [ac564f34c2d8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8848055215592255444.7571001512643797078. HINFO: read udp 10.244.0.3:45249->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8848055215592255444.7571001512643797078. HINFO: read udp 10.244.0.3:36481->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8848055215592255444.7571001512643797078. HINFO: read udp 10.244.0.3:35350->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8848055215592255444.7571001512643797078. HINFO: read udp 10.244.0.3:36592->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8848055215592255444.7571001512643797078. HINFO: read udp 10.244.0.3:51916->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8848055215592255444.7571001512643797078. HINFO: read udp 10.244.0.3:50902->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8848055215592255444.7571001512643797078. HINFO: read udp 10.244.0.3:50928->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8848055215592255444.7571001512643797078. HINFO: read udp 10.244.0.3:35284->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8848055215592255444.7571001512643797078. HINFO: read udp 10.244.0.3:55668->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8848055215592255444.7571001512643797078. HINFO: read udp 10.244.0.3:52417->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-359000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-359000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=running-upgrade-359000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T18_04_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:04:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-359000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:08:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:04:34 +0000   Sun, 04 Aug 2024 01:04:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:04:34 +0000   Sun, 04 Aug 2024 01:04:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:04:34 +0000   Sun, 04 Aug 2024 01:04:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:04:34 +0000   Sun, 04 Aug 2024 01:04:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-359000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 48d9e00763984d66a58cfbb427cbc96e
	  System UUID:                48d9e00763984d66a58cfbb427cbc96e
	  Boot ID:                    e6380c6e-2582-4a61-82b6-ad0a377ddf34
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dgxsg                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-m89fc                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-359000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m22s
	  kube-system                 kube-apiserver-running-upgrade-359000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-running-upgrade-359000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-nl8p2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-running-upgrade-359000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m6s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-359000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-359000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-359000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-359000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-359000 event: Registered Node running-upgrade-359000 in Controller
	
	
	==> dmesg <==
	[  +1.741346] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.061785] systemd-fstab-generator[890]: Ignoring "noauto" for root device
	[  +0.066130] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +1.145922] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.086698] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.075596] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[  +2.367782] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[Aug 4 01:00] systemd-fstab-generator[1935]: Ignoring "noauto" for root device
	[  +2.918480] systemd-fstab-generator[2214]: Ignoring "noauto" for root device
	[  +0.143104] systemd-fstab-generator[2247]: Ignoring "noauto" for root device
	[  +0.087637] systemd-fstab-generator[2258]: Ignoring "noauto" for root device
	[  +0.106850] systemd-fstab-generator[2271]: Ignoring "noauto" for root device
	[  +2.137866] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.108749] systemd-fstab-generator[2655]: Ignoring "noauto" for root device
	[  +0.078277] systemd-fstab-generator[2666]: Ignoring "noauto" for root device
	[  +0.076666] systemd-fstab-generator[2677]: Ignoring "noauto" for root device
	[  +0.081793] systemd-fstab-generator[2691]: Ignoring "noauto" for root device
	[  +2.335882] systemd-fstab-generator[2840]: Ignoring "noauto" for root device
	[  +2.841483] systemd-fstab-generator[3236]: Ignoring "noauto" for root device
	[  +1.523031] systemd-fstab-generator[3539]: Ignoring "noauto" for root device
	[ +19.491824] kauditd_printk_skb: 68 callbacks suppressed
	[Aug 4 01:04] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.330062] systemd-fstab-generator[9959]: Ignoring "noauto" for root device
	[  +5.133908] systemd-fstab-generator[10563]: Ignoring "noauto" for root device
	[  +0.440999] systemd-fstab-generator[10696]: Ignoring "noauto" for root device
	
	
	==> etcd [b561e504a901] <==
	{"level":"info","ts":"2024-08-04T01:04:30.523Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-04T01:04:30.523Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-04T01:04:30.523Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T01:04:30.523Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T01:04:30.523Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-04T01:04:30.523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-04T01:04:30.523Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-04T01:04:31.022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-04T01:04:31.022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-04T01:04:31.022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-04T01:04:31.022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-04T01:04:31.022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-04T01:04:31.022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-04T01:04:31.022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-04T01:04:31.023Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-359000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T01:04:31.023Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T01:04:31.023Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T01:04:31.024Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T01:04:31.030Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T01:04:31.030Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-04T01:04:31.030Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T01:04:31.030Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T01:04:31.043Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T01:04:31.043Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T01:04:31.043Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 01:08:55 up 9 min,  0 users,  load average: 0.34, 0.44, 0.23
	Linux running-upgrade-359000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [61b3a63eaddc] <==
	I0804 01:04:32.285288       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0804 01:04:32.296907       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 01:04:32.296967       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0804 01:04:32.297010       1 cache.go:39] Caches are synced for autoregister controller
	I0804 01:04:32.297066       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0804 01:04:32.296908       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 01:04:32.338936       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0804 01:04:33.027433       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0804 01:04:33.201361       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0804 01:04:33.204783       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0804 01:04:33.204798       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0804 01:04:33.342715       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 01:04:33.352597       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0804 01:04:33.464819       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0804 01:04:33.467776       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0804 01:04:33.468177       1 controller.go:611] quota admission added evaluator for: endpoints
	I0804 01:04:33.469550       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0804 01:04:34.332041       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0804 01:04:34.658235       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0804 01:04:34.662911       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0804 01:04:34.679350       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0804 01:04:34.719762       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 01:04:47.886736       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0804 01:04:48.086564       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0804 01:04:48.569902       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [01a42d8e56a8] <==
	W0804 01:04:47.240628       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-359000. Assuming now as a timestamp.
	I0804 01:04:47.240643       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0804 01:04:47.240648       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0804 01:04:47.240666       1 event.go:294] "Event occurred" object="running-upgrade-359000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-359000 event: Registered Node running-upgrade-359000 in Controller"
	I0804 01:04:47.282528       1 shared_informer.go:262] Caches are synced for disruption
	I0804 01:04:47.282619       1 disruption.go:371] Sending events to api server.
	I0804 01:04:47.283589       1 shared_informer.go:262] Caches are synced for daemon sets
	I0804 01:04:47.304150       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0804 01:04:47.343529       1 shared_informer.go:262] Caches are synced for expand
	I0804 01:04:47.382086       1 shared_informer.go:262] Caches are synced for PVC protection
	I0804 01:04:47.382112       1 shared_informer.go:262] Caches are synced for cronjob
	I0804 01:04:47.382169       1 shared_informer.go:262] Caches are synced for job
	I0804 01:04:47.387118       1 shared_informer.go:262] Caches are synced for ephemeral
	I0804 01:04:47.390254       1 shared_informer.go:262] Caches are synced for resource quota
	I0804 01:04:47.421615       1 shared_informer.go:262] Caches are synced for resource quota
	I0804 01:04:47.432438       1 shared_informer.go:262] Caches are synced for attach detach
	I0804 01:04:47.434795       1 shared_informer.go:262] Caches are synced for persistent volume
	I0804 01:04:47.435773       1 shared_informer.go:262] Caches are synced for stateful set
	I0804 01:04:47.809679       1 shared_informer.go:262] Caches are synced for garbage collector
	I0804 01:04:47.828406       1 shared_informer.go:262] Caches are synced for garbage collector
	I0804 01:04:47.828414       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0804 01:04:47.888108       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0804 01:04:48.091467       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nl8p2"
	I0804 01:04:48.188450       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dgxsg"
	I0804 01:04:48.190944       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-m89fc"
	
	
	==> kube-proxy [a422dda53c75] <==
	I0804 01:04:48.559322       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0804 01:04:48.559360       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0804 01:04:48.559370       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0804 01:04:48.567850       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0804 01:04:48.567860       1 server_others.go:206] "Using iptables Proxier"
	I0804 01:04:48.567912       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0804 01:04:48.568041       1 server.go:661] "Version info" version="v1.24.1"
	I0804 01:04:48.568050       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 01:04:48.568297       1 config.go:317] "Starting service config controller"
	I0804 01:04:48.568308       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0804 01:04:48.568345       1 config.go:226] "Starting endpoint slice config controller"
	I0804 01:04:48.568356       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0804 01:04:48.568592       1 config.go:444] "Starting node config controller"
	I0804 01:04:48.568616       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0804 01:04:48.668886       1 shared_informer.go:262] Caches are synced for node config
	I0804 01:04:48.668907       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0804 01:04:48.668886       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [1d5103a7d136] <==
	W0804 01:04:32.259784       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 01:04:32.259792       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0804 01:04:32.259842       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0804 01:04:32.259846       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0804 01:04:32.259872       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0804 01:04:32.259879       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0804 01:04:32.259933       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0804 01:04:32.259941       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0804 01:04:32.259969       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0804 01:04:32.259975       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 01:04:32.259981       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 01:04:32.260064       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0804 01:04:33.078386       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0804 01:04:33.078444       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0804 01:04:33.106931       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0804 01:04:33.107014       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 01:04:33.154314       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0804 01:04:33.154349       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0804 01:04:33.173486       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 01:04:33.173524       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0804 01:04:33.205131       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0804 01:04:33.205161       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0804 01:04:33.220055       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 01:04:33.220083       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0804 01:04:35.657141       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Sun 2024-08-04 00:59:43 UTC, ends at Sun 2024-08-04 01:08:55 UTC. --
	Aug 04 01:04:35 running-upgrade-359000 kubelet[10569]: I0804 01:04:35.704868   10569 apiserver.go:52] "Watching apiserver"
	Aug 04 01:04:36 running-upgrade-359000 kubelet[10569]: I0804 01:04:36.128654   10569 reconciler.go:157] "Reconciler: start to sync state"
	Aug 04 01:04:36 running-upgrade-359000 kubelet[10569]: E0804 01:04:36.290590   10569 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-359000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-359000"
	Aug 04 01:04:36 running-upgrade-359000 kubelet[10569]: E0804 01:04:36.489537   10569 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-359000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-359000"
	Aug 04 01:04:36 running-upgrade-359000 kubelet[10569]: E0804 01:04:36.690711   10569 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-359000\" already exists" pod="kube-system/etcd-running-upgrade-359000"
	Aug 04 01:04:36 running-upgrade-359000 kubelet[10569]: I0804 01:04:36.887365   10569 request.go:601] Waited for 1.112419232s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 04 01:04:36 running-upgrade-359000 kubelet[10569]: E0804 01:04:36.890501   10569 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-359000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-359000"
	Aug 04 01:04:47 running-upgrade-359000 kubelet[10569]: I0804 01:04:47.196436   10569 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 04 01:04:47 running-upgrade-359000 kubelet[10569]: I0804 01:04:47.196733   10569 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 04 01:04:47 running-upgrade-359000 kubelet[10569]: I0804 01:04:47.245629   10569 topology_manager.go:200] "Topology Admit Handler"
	Aug 04 01:04:47 running-upgrade-359000 kubelet[10569]: I0804 01:04:47.398728   10569 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdz64\" (UniqueName: \"kubernetes.io/projected/31d6ea8f-b604-4b95-8fb9-9b03e19b79b0-kube-api-access-pdz64\") pod \"storage-provisioner\" (UID: \"31d6ea8f-b604-4b95-8fb9-9b03e19b79b0\") " pod="kube-system/storage-provisioner"
	Aug 04 01:04:47 running-upgrade-359000 kubelet[10569]: I0804 01:04:47.398757   10569 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/31d6ea8f-b604-4b95-8fb9-9b03e19b79b0-tmp\") pod \"storage-provisioner\" (UID: \"31d6ea8f-b604-4b95-8fb9-9b03e19b79b0\") " pod="kube-system/storage-provisioner"
	Aug 04 01:04:48 running-upgrade-359000 kubelet[10569]: I0804 01:04:48.094928   10569 topology_manager.go:200] "Topology Admit Handler"
	Aug 04 01:04:48 running-upgrade-359000 kubelet[10569]: I0804 01:04:48.103273   10569 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e368e6e7-4492-409f-a589-abfda6583bde-xtables-lock\") pod \"kube-proxy-nl8p2\" (UID: \"e368e6e7-4492-409f-a589-abfda6583bde\") " pod="kube-system/kube-proxy-nl8p2"
	Aug 04 01:04:48 running-upgrade-359000 kubelet[10569]: I0804 01:04:48.103293   10569 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e368e6e7-4492-409f-a589-abfda6583bde-lib-modules\") pod \"kube-proxy-nl8p2\" (UID: \"e368e6e7-4492-409f-a589-abfda6583bde\") " pod="kube-system/kube-proxy-nl8p2"
	Aug 04 01:04:48 running-upgrade-359000 kubelet[10569]: I0804 01:04:48.103303   10569 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e368e6e7-4492-409f-a589-abfda6583bde-kube-proxy\") pod \"kube-proxy-nl8p2\" (UID: \"e368e6e7-4492-409f-a589-abfda6583bde\") " pod="kube-system/kube-proxy-nl8p2"
	Aug 04 01:04:48 running-upgrade-359000 kubelet[10569]: I0804 01:04:48.103313   10569 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t496d\" (UniqueName: \"kubernetes.io/projected/e368e6e7-4492-409f-a589-abfda6583bde-kube-api-access-t496d\") pod \"kube-proxy-nl8p2\" (UID: \"e368e6e7-4492-409f-a589-abfda6583bde\") " pod="kube-system/kube-proxy-nl8p2"
	Aug 04 01:04:48 running-upgrade-359000 kubelet[10569]: I0804 01:04:48.192243   10569 topology_manager.go:200] "Topology Admit Handler"
	Aug 04 01:04:48 running-upgrade-359000 kubelet[10569]: I0804 01:04:48.197545   10569 topology_manager.go:200] "Topology Admit Handler"
	Aug 04 01:04:48 running-upgrade-359000 kubelet[10569]: I0804 01:04:48.303700   10569 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aed44c2f-7114-432e-8176-62aeefb97e48-config-volume\") pod \"coredns-6d4b75cb6d-m89fc\" (UID: \"aed44c2f-7114-432e-8176-62aeefb97e48\") " pod="kube-system/coredns-6d4b75cb6d-m89fc"
	Aug 04 01:04:48 running-upgrade-359000 kubelet[10569]: I0804 01:04:48.303725   10569 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a972626-fc7d-45f5-a49d-86be570d764b-config-volume\") pod \"coredns-6d4b75cb6d-dgxsg\" (UID: \"7a972626-fc7d-45f5-a49d-86be570d764b\") " pod="kube-system/coredns-6d4b75cb6d-dgxsg"
	Aug 04 01:04:48 running-upgrade-359000 kubelet[10569]: I0804 01:04:48.303739   10569 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmmsr\" (UniqueName: \"kubernetes.io/projected/aed44c2f-7114-432e-8176-62aeefb97e48-kube-api-access-fmmsr\") pod \"coredns-6d4b75cb6d-m89fc\" (UID: \"aed44c2f-7114-432e-8176-62aeefb97e48\") " pod="kube-system/coredns-6d4b75cb6d-m89fc"
	Aug 04 01:04:48 running-upgrade-359000 kubelet[10569]: I0804 01:04:48.303751   10569 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp6nm\" (UniqueName: \"kubernetes.io/projected/7a972626-fc7d-45f5-a49d-86be570d764b-kube-api-access-xp6nm\") pod \"coredns-6d4b75cb6d-dgxsg\" (UID: \"7a972626-fc7d-45f5-a49d-86be570d764b\") " pod="kube-system/coredns-6d4b75cb6d-dgxsg"
	Aug 04 01:08:37 running-upgrade-359000 kubelet[10569]: I0804 01:08:37.026113   10569 scope.go:110] "RemoveContainer" containerID="bcbb40889ca3d93e56d0a701e44a3173245965614c639ccb9edd6b24d0e34e7a"
	Aug 04 01:08:37 running-upgrade-359000 kubelet[10569]: I0804 01:08:37.048093   10569 scope.go:110] "RemoveContainer" containerID="3d1437e6d6fcf280d246bc1f8694076a693e43ed0fcdaf8551c6019311cd953d"
	
	
	==> storage-provisioner [3318cf46c892] <==
	I0804 01:04:47.759620       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 01:04:47.763420       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 01:04:47.763437       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0804 01:04:47.766526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0804 01:04:47.766771       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-359000_9f4223e9-1991-4f5d-9e55-725a4df87f04!
	I0804 01:04:47.767092       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7fad96f9-7a33-48b5-b01f-31a24744a4bc", APIVersion:"v1", ResourceVersion:"313", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-359000_9f4223e9-1991-4f5d-9e55-725a4df87f04 became leader
	I0804 01:04:47.867600       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-359000_9f4223e9-1991-4f5d-9e55-725a4df87f04!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-359000 -n running-upgrade-359000
E0803 18:09:05.724917    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-359000 -n running-upgrade-359000: exit status 2 (15.705444166s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-359000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-359000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-359000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-359000: (1.152776625s)
--- FAIL: TestRunningBinaryUpgrade (595.56s)

TestKubernetesUpgrade (18.01s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-366000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-366000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.744571041s)

-- stdout --
	* [kubernetes-upgrade-366000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-366000" primary control-plane node in "kubernetes-upgrade-366000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-366000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:02:17.262659    4559 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:02:17.262799    4559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:02:17.262803    4559 out.go:304] Setting ErrFile to fd 2...
	I0803 18:02:17.262805    4559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:02:17.262929    4559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:02:17.264040    4559 out.go:298] Setting JSON to false
	I0803 18:02:17.280219    4559 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3701,"bootTime":1722729636,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:02:17.280287    4559 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:02:17.286476    4559 out.go:177] * [kubernetes-upgrade-366000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:02:17.294259    4559 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:02:17.294301    4559 notify.go:220] Checking for updates...
	I0803 18:02:17.301411    4559 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:02:17.302744    4559 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:02:17.305398    4559 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:02:17.308449    4559 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:02:17.311452    4559 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:02:17.314659    4559 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:02:17.314727    4559 config.go:182] Loaded profile config "running-upgrade-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:02:17.314791    4559 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:02:17.319381    4559 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:02:17.326389    4559 start.go:297] selected driver: qemu2
	I0803 18:02:17.326394    4559 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:02:17.326400    4559 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:02:17.328640    4559 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:02:17.331355    4559 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:02:17.334507    4559 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 18:02:17.334550    4559 cni.go:84] Creating CNI manager for ""
	I0803 18:02:17.334558    4559 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0803 18:02:17.334591    4559 start.go:340] cluster config:
	{Name:kubernetes-upgrade-366000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-366000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:02:17.338204    4559 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:02:17.345432    4559 out.go:177] * Starting "kubernetes-upgrade-366000" primary control-plane node in "kubernetes-upgrade-366000" cluster
	I0803 18:02:17.348356    4559 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 18:02:17.348370    4559 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 18:02:17.348379    4559 cache.go:56] Caching tarball of preloaded images
	I0803 18:02:17.348432    4559 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:02:17.348437    4559 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0803 18:02:17.348487    4559 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/kubernetes-upgrade-366000/config.json ...
	I0803 18:02:17.348498    4559 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/kubernetes-upgrade-366000/config.json: {Name:mk85a03560fbd94e6a985dd2b43fd33fc668b07a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:02:17.348798    4559 start.go:360] acquireMachinesLock for kubernetes-upgrade-366000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:02:17.348834    4559 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "kubernetes-upgrade-366000"
	I0803 18:02:17.348844    4559 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-366000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-366000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:02:17.348877    4559 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:02:17.356277    4559 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:02:17.373566    4559 start.go:159] libmachine.API.Create for "kubernetes-upgrade-366000" (driver="qemu2")
	I0803 18:02:17.373599    4559 client.go:168] LocalClient.Create starting
	I0803 18:02:17.373682    4559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:02:17.373720    4559 main.go:141] libmachine: Decoding PEM data...
	I0803 18:02:17.373732    4559 main.go:141] libmachine: Parsing certificate...
	I0803 18:02:17.373778    4559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:02:17.373804    4559 main.go:141] libmachine: Decoding PEM data...
	I0803 18:02:17.373811    4559 main.go:141] libmachine: Parsing certificate...
	I0803 18:02:17.374214    4559 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:02:17.532094    4559 main.go:141] libmachine: Creating SSH key...
	I0803 18:02:17.568054    4559 main.go:141] libmachine: Creating Disk image...
	I0803 18:02:17.568061    4559 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:02:17.568289    4559 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2
	I0803 18:02:17.577968    4559 main.go:141] libmachine: STDOUT: 
	I0803 18:02:17.577987    4559 main.go:141] libmachine: STDERR: 
	I0803 18:02:17.578038    4559 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2 +20000M
	I0803 18:02:17.586299    4559 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:02:17.586315    4559 main.go:141] libmachine: STDERR: 
	I0803 18:02:17.586326    4559 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2
	I0803 18:02:17.586331    4559 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:02:17.586352    4559 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:02:17.586379    4559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:98:4c:af:f7:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2
	I0803 18:02:17.588078    4559 main.go:141] libmachine: STDOUT: 
	I0803 18:02:17.588093    4559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:02:17.588109    4559 client.go:171] duration metric: took 214.508917ms to LocalClient.Create
	I0803 18:02:19.590232    4559 start.go:128] duration metric: took 2.241373541s to createHost
	I0803 18:02:19.590261    4559 start.go:83] releasing machines lock for "kubernetes-upgrade-366000", held for 2.241451792s
	W0803 18:02:19.590295    4559 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:02:19.595415    4559 out.go:177] * Deleting "kubernetes-upgrade-366000" in qemu2 ...
	W0803 18:02:19.619580    4559 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:02:19.619593    4559 start.go:729] Will try again in 5 seconds ...
	I0803 18:02:24.621539    4559 start.go:360] acquireMachinesLock for kubernetes-upgrade-366000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:02:24.621660    4559 start.go:364] duration metric: took 94.375µs to acquireMachinesLock for "kubernetes-upgrade-366000"
	I0803 18:02:24.621676    4559 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-366000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-366000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:02:24.621723    4559 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:02:24.625022    4559 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:02:24.641452    4559 start.go:159] libmachine.API.Create for "kubernetes-upgrade-366000" (driver="qemu2")
	I0803 18:02:24.641487    4559 client.go:168] LocalClient.Create starting
	I0803 18:02:24.641562    4559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:02:24.641602    4559 main.go:141] libmachine: Decoding PEM data...
	I0803 18:02:24.641610    4559 main.go:141] libmachine: Parsing certificate...
	I0803 18:02:24.641649    4559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:02:24.641673    4559 main.go:141] libmachine: Decoding PEM data...
	I0803 18:02:24.641683    4559 main.go:141] libmachine: Parsing certificate...
	I0803 18:02:24.641950    4559 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:02:24.798399    4559 main.go:141] libmachine: Creating SSH key...
	I0803 18:02:24.917954    4559 main.go:141] libmachine: Creating Disk image...
	I0803 18:02:24.917965    4559 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:02:24.920148    4559 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2
	I0803 18:02:24.929506    4559 main.go:141] libmachine: STDOUT: 
	I0803 18:02:24.929521    4559 main.go:141] libmachine: STDERR: 
	I0803 18:02:24.929581    4559 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2 +20000M
	I0803 18:02:24.937620    4559 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:02:24.937634    4559 main.go:141] libmachine: STDERR: 
	I0803 18:02:24.937644    4559 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2
	I0803 18:02:24.937654    4559 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:02:24.937665    4559 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:02:24.937697    4559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:fa:34:83:b5:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2
	I0803 18:02:24.939396    4559 main.go:141] libmachine: STDOUT: 
	I0803 18:02:24.939416    4559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:02:24.939430    4559 client.go:171] duration metric: took 297.94275ms to LocalClient.Create
	I0803 18:02:26.941606    4559 start.go:128] duration metric: took 2.31989925s to createHost
	I0803 18:02:26.941681    4559 start.go:83] releasing machines lock for "kubernetes-upgrade-366000", held for 2.320053791s
	W0803 18:02:26.942008    4559 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-366000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-366000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:02:26.950777    4559 out.go:177] 
	W0803 18:02:26.954903    4559 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:02:26.954953    4559 out.go:239] * 
	* 
	W0803 18:02:26.957721    4559 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:02:26.967807    4559 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-366000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-366000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-366000: (2.858910917s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-366000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-366000 status --format={{.Host}}: exit status 7 (62.042875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-366000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-366000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.189332792s)

-- stdout --
	* [kubernetes-upgrade-366000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-366000" primary control-plane node in "kubernetes-upgrade-366000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-366000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-366000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:02:29.931802    4593 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:02:29.931975    4593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:02:29.931979    4593 out.go:304] Setting ErrFile to fd 2...
	I0803 18:02:29.931981    4593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:02:29.932104    4593 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:02:29.933151    4593 out.go:298] Setting JSON to false
	I0803 18:02:29.949309    4593 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3713,"bootTime":1722729636,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:02:29.949378    4593 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:02:29.953459    4593 out.go:177] * [kubernetes-upgrade-366000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:02:29.960321    4593 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:02:29.960358    4593 notify.go:220] Checking for updates...
	I0803 18:02:29.968189    4593 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:02:29.971314    4593 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:02:29.975329    4593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:02:29.976571    4593 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:02:29.979288    4593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:02:29.982566    4593 config.go:182] Loaded profile config "kubernetes-upgrade-366000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0803 18:02:29.982828    4593 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:02:29.986167    4593 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 18:02:29.993373    4593 start.go:297] selected driver: qemu2
	I0803 18:02:29.993380    4593 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-366000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-366000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:02:29.993424    4593 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:02:29.995676    4593 cni.go:84] Creating CNI manager for ""
	I0803 18:02:29.995690    4593 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:02:29.995711    4593 start.go:340] cluster config:
	{Name:kubernetes-upgrade-366000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-366000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:02:29.999052    4593 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:02:30.007329    4593 out.go:177] * Starting "kubernetes-upgrade-366000" primary control-plane node in "kubernetes-upgrade-366000" cluster
	I0803 18:02:30.011325    4593 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 18:02:30.011350    4593 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0803 18:02:30.011363    4593 cache.go:56] Caching tarball of preloaded images
	I0803 18:02:30.011426    4593 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:02:30.011432    4593 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0803 18:02:30.011494    4593 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/kubernetes-upgrade-366000/config.json ...
	I0803 18:02:30.011860    4593 start.go:360] acquireMachinesLock for kubernetes-upgrade-366000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:02:30.011886    4593 start.go:364] duration metric: took 20.25µs to acquireMachinesLock for "kubernetes-upgrade-366000"
	I0803 18:02:30.011894    4593 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:02:30.011900    4593 fix.go:54] fixHost starting: 
	I0803 18:02:30.012008    4593 fix.go:112] recreateIfNeeded on kubernetes-upgrade-366000: state=Stopped err=<nil>
	W0803 18:02:30.012018    4593 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:02:30.016337    4593 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-366000" ...
	I0803 18:02:30.024325    4593 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:02:30.024360    4593 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:fa:34:83:b5:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2
	I0803 18:02:30.026287    4593 main.go:141] libmachine: STDOUT: 
	I0803 18:02:30.026305    4593 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:02:30.026334    4593 fix.go:56] duration metric: took 14.434042ms for fixHost
	I0803 18:02:30.026338    4593 start.go:83] releasing machines lock for "kubernetes-upgrade-366000", held for 14.448167ms
	W0803 18:02:30.026345    4593 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:02:30.026380    4593 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:02:30.026384    4593 start.go:729] Will try again in 5 seconds ...
	I0803 18:02:35.028518    4593 start.go:360] acquireMachinesLock for kubernetes-upgrade-366000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:02:35.029092    4593 start.go:364] duration metric: took 396.791µs to acquireMachinesLock for "kubernetes-upgrade-366000"
	I0803 18:02:35.029233    4593 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:02:35.029257    4593 fix.go:54] fixHost starting: 
	I0803 18:02:35.029998    4593 fix.go:112] recreateIfNeeded on kubernetes-upgrade-366000: state=Stopped err=<nil>
	W0803 18:02:35.030025    4593 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:02:35.034420    4593 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-366000" ...
	I0803 18:02:35.041553    4593 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:02:35.041806    4593 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:fa:34:83:b5:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubernetes-upgrade-366000/disk.qcow2
	I0803 18:02:35.051681    4593 main.go:141] libmachine: STDOUT: 
	I0803 18:02:35.051756    4593 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:02:35.051867    4593 fix.go:56] duration metric: took 22.613209ms for fixHost
	I0803 18:02:35.051888    4593 start.go:83] releasing machines lock for "kubernetes-upgrade-366000", held for 22.735208ms
	W0803 18:02:35.052059    4593 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-366000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-366000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:02:35.060576    4593 out.go:177] 
	W0803 18:02:35.064508    4593 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:02:35.064545    4593 out.go:239] * 
	* 
	W0803 18:02:35.066207    4593 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:02:35.081490    4593 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-366000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
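
The failure above is environmental rather than an upgrade bug: with --driver=qemu2 on this agent, minikube execs /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the unix socket at /var/run/socket_vmnet before it can launch qemu-system-aarch64. With no socket_vmnet daemon listening, the dial fails with "Connection refused" and the start aborts with GUEST_PROVISION (exit status 80). A minimal Go sketch of the same reachability probe, assuming only the socket path shown in the command above (hypothetical helper, not minikube code):

    // probe_socket_vmnet.go: dial the unix socket that socket_vmnet_client
    // needs; if no daemon is listening, the dial fails with "connection
    // refused", matching the driver error captured above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path from the failing command
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "Failed to connect to %q: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is listening; qemu2 networking should work")
    }
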
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-366000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-366000 version --output=json: exit status 1 (39.557625ms)

** stderr ** 
	error: context "kubernetes-upgrade-366000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-03 18:02:35.131983 -0700 PDT m=+2560.366085126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-366000 -n kubernetes-upgrade-366000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-366000 -n kubernetes-upgrade-366000: exit status 7 (29.221416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-366000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-366000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-366000
--- FAIL: TestKubernetesUpgrade (18.01s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.76s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19364
- KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3811306619/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
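
This subtest (and the identical v1.2.0 variant that follows) never reaches the upgrade logic: hyperkit targets Intel Macs only, so on a darwin/arm64 agent minikube rejects the driver up front with DRV_UNSUPPORTED_OS and exit code 56. The gate amounts to an OS/arch check; a minimal sketch under that assumption (illustrative only, not minikube's actual driver-registry code):

    // driver_gate.go: hypothetical sketch of a platform gate like the one
    // that produces DRV_UNSUPPORTED_OS above.
    package main

    import (
        "fmt"
        "os"
        "runtime"
    )

    func main() {
        // hyperkit only runs on Intel macOS hosts.
        if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
            fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
                runtime.GOOS, runtime.GOARCH)
            os.Exit(56)
        }
        fmt.Println("hyperkit is supported on this platform")
    }
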
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.76s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.19s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19364
- KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2602448673/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.19s)

TestStoppedBinaryUpgrade/Upgrade (573.8s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1256295832 start -p stopped-upgrade-413000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1256295832 start -p stopped-upgrade-413000 --memory=2200 --vm-driver=qemu2 : (39.661764292s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1256295832 -p stopped-upgrade-413000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1256295832 -p stopped-upgrade-413000 stop: (12.120528458s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-413000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0803 18:04:05.733113    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 18:06:23.562217    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-413000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.899994625s)

-- stdout --
	* [stopped-upgrade-413000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-413000" primary control-plane node in "stopped-upgrade-413000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-413000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I0803 18:03:28.010602    4630 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:03:28.010796    4630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:03:28.010800    4630 out.go:304] Setting ErrFile to fd 2...
	I0803 18:03:28.010803    4630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:03:28.010981    4630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:03:28.012131    4630 out.go:298] Setting JSON to false
	I0803 18:03:28.031469    4630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3772,"bootTime":1722729636,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:03:28.031550    4630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:03:28.036816    4630 out.go:177] * [stopped-upgrade-413000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:03:28.043743    4630 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:03:28.043806    4630 notify.go:220] Checking for updates...
	I0803 18:03:28.051778    4630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:03:28.054730    4630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:03:28.057785    4630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:03:28.060785    4630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:03:28.063787    4630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:03:28.067013    4630 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:03:28.069744    4630 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0803 18:03:28.072741    4630 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:03:28.076784    4630 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 18:03:28.082734    4630 start.go:297] selected driver: qemu2
	I0803 18:03:28.082739    4630 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50497 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 18:03:28.082788    4630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:03:28.085412    4630 cni.go:84] Creating CNI manager for ""
	I0803 18:03:28.085430    4630 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:03:28.085456    4630 start.go:340] cluster config:
	{Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50497 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 18:03:28.085509    4630 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:03:28.093755    4630 out.go:177] * Starting "stopped-upgrade-413000" primary control-plane node in "stopped-upgrade-413000" cluster
	I0803 18:03:28.097738    4630 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 18:03:28.097756    4630 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0803 18:03:28.097768    4630 cache.go:56] Caching tarball of preloaded images
	I0803 18:03:28.097826    4630 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:03:28.097831    4630 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0803 18:03:28.097892    4630 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/config.json ...
	I0803 18:03:28.098320    4630 start.go:360] acquireMachinesLock for stopped-upgrade-413000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:03:28.098355    4630 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "stopped-upgrade-413000"
	I0803 18:03:28.098365    4630 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:03:28.098370    4630 fix.go:54] fixHost starting: 
	I0803 18:03:28.098487    4630 fix.go:112] recreateIfNeeded on stopped-upgrade-413000: state=Stopped err=<nil>
	W0803 18:03:28.098495    4630 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:03:28.106742    4630 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-413000" ...
	I0803 18:03:28.110752    4630 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:03:28.110824    4630 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50464-:22,hostfwd=tcp::50465-:2376,hostname=stopped-upgrade-413000 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/disk.qcow2
	I0803 18:03:28.159042    4630 main.go:141] libmachine: STDOUT: 
	I0803 18:03:28.159066    4630 main.go:141] libmachine: STDERR: 
	I0803 18:03:28.159072    4630 main.go:141] libmachine: Waiting for VM to start (ssh -p 50464 docker@127.0.0.1)...
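
This older profile uses QEMU user-mode networking rather than socket_vmnet: the -nic user,...,hostfwd=tcp::50464-:22,hostfwd=tcp::50465-:2376 flags forward host ports 50464 and 50465 to the guest's sshd and Docker daemon, so no host-side daemon is required, which is why this start succeeds where the earlier qemu2 starts failed. The "Waiting for VM to start" step is effectively a retry loop against the forwarded SSH port; a minimal sketch of such a wait (hypothetical helper, not libmachine code):

    // wait_ssh.go: hypothetical sketch of waiting for a hostfwd'ed guest port.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForPort retries a TCP dial until the port accepts or the deadline passes.
    func waitForPort(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if conn, err := net.DialTimeout("tcp", addr, time.Second); err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        // 50464 is the hostfwd port mapped to the guest's :22 above.
        if err := waitForPort("127.0.0.1:50464", 5*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("guest SSH port is accepting connections")
    }
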
	I0803 18:03:48.646268    4630 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/config.json ...
	I0803 18:03:48.647024    4630 machine.go:94] provisionDockerMachine start ...
	I0803 18:03:48.647193    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:48.647672    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:48.647685    4630 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 18:03:48.731080    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0803 18:03:48.731114    4630 buildroot.go:166] provisioning hostname "stopped-upgrade-413000"
	I0803 18:03:48.731249    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:48.731516    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:48.731527    4630 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-413000 && echo "stopped-upgrade-413000" | sudo tee /etc/hostname
	I0803 18:03:48.809412    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-413000
	
	I0803 18:03:48.809496    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:48.809670    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:48.809683    4630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-413000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-413000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-413000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 18:03:48.881428    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 18:03:48.881441    4630 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19364-1166/.minikube CaCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19364-1166/.minikube}
	I0803 18:03:48.881449    4630 buildroot.go:174] setting up certificates
	I0803 18:03:48.881455    4630 provision.go:84] configureAuth start
	I0803 18:03:48.881461    4630 provision.go:143] copyHostCerts
	I0803 18:03:48.881545    4630 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem, removing ...
	I0803 18:03:48.881553    4630 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem
	I0803 18:03:48.881725    4630 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.pem (1082 bytes)
	I0803 18:03:48.881950    4630 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem, removing ...
	I0803 18:03:48.881955    4630 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem
	I0803 18:03:48.882023    4630 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/cert.pem (1123 bytes)
	I0803 18:03:48.882163    4630 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem, removing ...
	I0803 18:03:48.882167    4630 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem
	I0803 18:03:48.882236    4630 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19364-1166/.minikube/key.pem (1675 bytes)
	I0803 18:03:48.882346    4630 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-413000 san=[127.0.0.1 localhost minikube stopped-upgrade-413000]
	I0803 18:03:48.982049    4630 provision.go:177] copyRemoteCerts
	I0803 18:03:48.982095    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 18:03:48.982104    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0803 18:03:49.015468    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 18:03:49.022285    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0803 18:03:49.028689    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 18:03:49.035887    4630 provision.go:87] duration metric: took 154.430333ms to configureAuth
	I0803 18:03:49.035895    4630 buildroot.go:189] setting minikube options for container-runtime
	I0803 18:03:49.036007    4630 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:03:49.036040    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:49.036147    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:49.036156    4630 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0803 18:03:49.097984    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0803 18:03:49.097999    4630 buildroot.go:70] root file system type: tmpfs
	I0803 18:03:49.098051    4630 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0803 18:03:49.098110    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:49.098230    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:49.098263    4630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0803 18:03:49.161499    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0803 18:03:49.161555    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:49.161671    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:49.161680    4630 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0803 18:03:49.529432    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0803 18:03:49.529446    4630 machine.go:97] duration metric: took 882.434709ms to provisionDockerMachine
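
The diff-then-move idiom above keeps provisioning idempotent: the rendered unit is written to docker.service.new, compared with the installed file, and only on a difference is it moved into place and followed by daemon-reload/enable/restart (here diff fails because no unit existed yet, so the fresh file is installed and the symlink created). The write-if-changed core, sketched in Go (illustrative only; the real flow runs the shell shown above over SSH):

    // write_if_changed.go: hypothetical sketch of the idempotent unit install.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // writeIfChanged only replaces path when content differs, so callers can
    // skip daemon-reload/restart on a no-op provision.
    func writeIfChanged(path string, content []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, content) {
            return false, nil
        }
        tmp := path + ".new"
        if err := os.WriteFile(tmp, content, 0o644); err != nil {
            return false, err
        }
        return true, os.Rename(tmp, path)
    }

    func main() {
        changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println("changed:", changed, "err:", err)
    }
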
	I0803 18:03:49.529453    4630 start.go:293] postStartSetup for "stopped-upgrade-413000" (driver="qemu2")
	I0803 18:03:49.529460    4630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 18:03:49.529522    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 18:03:49.529532    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0803 18:03:49.564102    4630 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 18:03:49.565324    4630 info.go:137] Remote host: Buildroot 2021.02.12
	I0803 18:03:49.565332    4630 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1166/.minikube/addons for local assets ...
	I0803 18:03:49.565414    4630 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1166/.minikube/files for local assets ...
	I0803 18:03:49.565557    4630 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem -> 16732.pem in /etc/ssl/certs
	I0803 18:03:49.565683    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 18:03:49.568488    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem --> /etc/ssl/certs/16732.pem (1708 bytes)
	I0803 18:03:49.575567    4630 start.go:296] duration metric: took 46.1105ms for postStartSetup
	I0803 18:03:49.575580    4630 fix.go:56] duration metric: took 21.477822875s for fixHost
	I0803 18:03:49.575613    4630 main.go:141] libmachine: Using SSH client type: native
	I0803 18:03:49.575721    4630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10060aa10] 0x10060d270 <nil>  [] 0s} localhost 50464 <nil> <nil>}
	I0803 18:03:49.575727    4630 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 18:03:49.636096    4630 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722733429.723419421
	
	I0803 18:03:49.636106    4630 fix.go:216] guest clock: 1722733429.723419421
	I0803 18:03:49.636110    4630 fix.go:229] Guest: 2024-08-03 18:03:49.723419421 -0700 PDT Remote: 2024-08-03 18:03:49.575581 -0700 PDT m=+21.594381418 (delta=147.838421ms)
	I0803 18:03:49.636121    4630 fix.go:200] guest clock delta is within tolerance: 147.838421ms
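
The guest-clock check is plain subtraction: the guest's `date +%s.%N` result (1722733429.723419421) minus the host wall clock captured when the command returned (1722733429.575581) is 0.147838421s, the 147.838421ms delta logged, inside the sync tolerance, so no clock adjustment is needed. The same arithmetic, with the constants copied from the log (float64 rounds the nanoseconds slightly):

    // clock_delta.go: reproduces the guest/host clock delta computed above.
    package main

    import "fmt"

    func main() {
        guest := 1722733429.723419421 // guest `date +%s.%N`
        host := 1722733429.575581     // host wall clock at command return
        fmt.Printf("delta ≈ %.3fms\n", (guest-host)*1000) // ≈ 147.838ms
    }
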
	I0803 18:03:49.636124    4630 start.go:83] releasing machines lock for "stopped-upgrade-413000", held for 21.538376667s
	I0803 18:03:49.636190    4630 ssh_runner.go:195] Run: cat /version.json
	I0803 18:03:49.636199    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0803 18:03:49.636367    4630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 18:03:49.636384    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	W0803 18:03:49.636829    4630 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50464: connect: connection refused
	I0803 18:03:49.636850    4630 retry.go:31] will retry after 177.559602ms: dial tcp [::1]:50464: connect: connection refused
	W0803 18:03:49.668680    4630 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0803 18:03:49.668734    4630 ssh_runner.go:195] Run: systemctl --version
	I0803 18:03:49.670579    4630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 18:03:49.672149    4630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 18:03:49.672179    4630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0803 18:03:49.674969    4630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0803 18:03:49.679212    4630 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 18:03:49.679221    4630 start.go:495] detecting cgroup driver to use...
	I0803 18:03:49.679300    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 18:03:49.686252    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0803 18:03:49.689125    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0803 18:03:49.692300    4630 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0803 18:03:49.692329    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0803 18:03:49.695565    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 18:03:49.698437    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0803 18:03:49.701225    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 18:03:49.704511    4630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 18:03:49.707576    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0803 18:03:49.710418    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0803 18:03:49.713336    4630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0803 18:03:49.716358    4630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 18:03:49.719280    4630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 18:03:49.721726    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:49.802614    4630 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0803 18:03:49.810454    4630 start.go:495] detecting cgroup driver to use...
	I0803 18:03:49.810522    4630 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0803 18:03:49.815314    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 18:03:49.825657    4630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 18:03:49.831533    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 18:03:49.835989    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 18:03:49.840203    4630 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0803 18:03:49.892300    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 18:03:49.897607    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 18:03:49.904063    4630 ssh_runner.go:195] Run: which cri-dockerd
	I0803 18:03:49.905304    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0803 18:03:49.908377    4630 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0803 18:03:49.913377    4630 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0803 18:03:49.991123    4630 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0803 18:03:50.070278    4630 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0803 18:03:50.070350    4630 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0803 18:03:50.075756    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:50.149438    4630 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 18:03:51.306596    4630 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157175708s)
	I0803 18:03:51.306651    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0803 18:03:51.311045    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 18:03:51.316832    4630 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0803 18:03:51.386686    4630 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0803 18:03:51.466215    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:51.547718    4630 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0803 18:03:51.553974    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 18:03:51.558635    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:51.639387    4630 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0803 18:03:51.677609    4630 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0803 18:03:51.677681    4630 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0803 18:03:51.680688    4630 start.go:563] Will wait 60s for crictl version
	I0803 18:03:51.680735    4630 ssh_runner.go:195] Run: which crictl
	I0803 18:03:51.682357    4630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 18:03:51.696622    4630 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0803 18:03:51.696688    4630 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 18:03:51.712273    4630 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 18:03:51.733014    4630 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0803 18:03:51.733136    4630 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0803 18:03:51.734499    4630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
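
Here 10.0.2.2 is the host's address as seen from a QEMU user-mode guest, and the one-liner rewrites /etc/hosts safely: filter out any stale host.minikube.internal line, append the fresh mapping, write the result to a temp file, then copy it over the original. The same upsert, sketched in Go (hypothetical helper; the real flow is the shell pipeline shown above):

    // hosts_upsert.go: hypothetical sketch of the /etc/hosts update above.
    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostEntry drops any existing line for name, then appends ip<TAB>name.
    func upsertHostEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHostEntry("127.0.0.1\tlocalhost\n", "10.0.2.2", "host.minikube.internal"))
    }
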
	I0803 18:03:51.737946    4630 kubeadm.go:883] updating cluster {Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50497 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0803 18:03:51.737987    4630 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 18:03:51.738038    4630 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 18:03:51.748555    4630 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 18:03:51.748564    4630 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 18:03:51.748609    4630 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 18:03:51.752266    4630 ssh_runner.go:195] Run: which lz4
	I0803 18:03:51.753757    4630 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0803 18:03:51.754975    4630 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 18:03:51.754985    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0803 18:03:52.655291    4630 docker.go:649] duration metric: took 901.592792ms to copy over tarball
	I0803 18:03:52.655350    4630 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 18:03:53.811514    4630 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.156183667s)
	I0803 18:03:53.811527    4630 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0803 18:03:53.827965    4630 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 18:03:53.831642    4630 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0803 18:03:53.836702    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:53.914534    4630 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 18:03:55.403483    4630 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.48897675s)
	I0803 18:03:55.403584    4630 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 18:03:55.414350    4630 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 18:03:55.414362    4630 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 18:03:55.414368    4630 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0803 18:03:55.418677    4630 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:03:55.420396    4630 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:03:55.422492    4630 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:03:55.422630    4630 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:03:55.425471    4630 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:03:55.425698    4630 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:03:55.427158    4630 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:03:55.427275    4630 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:03:55.429012    4630 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0803 18:03:55.429012    4630 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:03:55.430188    4630 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:03:55.430281    4630 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:03:55.431296    4630 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0803 18:03:55.431391    4630 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0803 18:03:55.440342    4630 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:03:55.441425    4630 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0803 18:03:55.835161    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:03:55.847213    4630 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0803 18:03:55.847235    4630 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 18:03:55.847297    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
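
The burst of "needs transfer" messages is image-cache reconciliation: the preload populated the runtime with k8s.gcr.io-tagged images, while this minikube expects registry.k8s.io names, so every expected image fails the ID lookup, is removed, and is re-loaded from the on-disk cache. The check boils down to comparing `docker image inspect --format {{.Id}}` with the expected hash; a minimal sketch (hypothetical helper; the image name and hash are copied from the log):

    // needs_transfer.go: hypothetical sketch of the image-ID check above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether image must be loaded from the cache:
    // true when it is absent or its ID does not match the expected hash.
    func needsTransfer(image, wantHash string) bool {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // not present in the runtime at all
        }
        return !strings.Contains(strings.TrimSpace(string(out)), wantHash)
    }

    func main() {
        fmt.Println(needsTransfer("registry.k8s.io/kube-apiserver:v1.24.1",
            "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9"))
    }
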
	I0803 18:03:55.857291    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:03:55.857376    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0803 18:03:55.865291    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:03:55.867788    4630 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0803 18:03:55.867807    4630 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:03:55.867847    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0803 18:03:55.877506    4630 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0803 18:03:55.877531    4630 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:03:55.877583    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0803 18:03:55.879750    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0803 18:03:55.887718    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0803 18:03:55.892314    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0803 18:03:55.903095    4630 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0803 18:03:55.903239    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:03:55.903784    4630 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0803 18:03:55.903810    4630 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0803 18:03:55.903836    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0803 18:03:55.908601    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:03:55.914791    4630 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0803 18:03:55.914815    4630 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:03:55.914865    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0803 18:03:55.925374    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0803 18:03:55.925497    4630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0803 18:03:55.930570    4630 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0803 18:03:55.930590    4630 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:03:55.930639    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 18:03:55.935452    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0803 18:03:55.935570    4630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0803 18:03:55.935574    4630 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0803 18:03:55.935590    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0803 18:03:55.939896    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0803 18:03:55.944607    4630 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0803 18:03:55.944626    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
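
Each cached image is shipped as a tarball under /var/lib/minikube/images and streamed into the daemon with `cat ... | docker load`, after which the log records it as "Transferred and loaded ... from cache". The equivalent of that pipe, sketched in Go (illustrative; the real call runs the shell pipeline over SSH):

    // docker_load.go: hypothetical sketch of piping a cached tarball into docker load.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        f, err := os.Open("/var/lib/minikube/images/pause_3.7") // cache path from the log
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f // stand-in for `cat tarball | docker load`
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("load failed:", err)
        }
    }
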
	I0803 18:03:55.947570    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0803 18:03:55.947576    4630 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0803 18:03:55.947600    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0803 18:03:55.978055    4630 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0803 18:03:55.978080    4630 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0803 18:03:55.978135    4630 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0803 18:03:56.016488    4630 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0803 18:03:56.016506    4630 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0803 18:03:56.016516    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0803 18:03:56.016543    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0803 18:03:56.016647    4630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W0803 18:03:56.043667    4630 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0803 18:03:56.043780    4630 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:03:56.066294    4630 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0803 18:03:56.066338    4630 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0803 18:03:56.066364    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0803 18:03:56.066382    4630 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0803 18:03:56.066404    4630 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:03:56.066448    4630 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:03:56.101893    4630 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0803 18:03:56.102031    4630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0803 18:03:56.115081    4630 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0803 18:03:56.115119    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0803 18:03:56.178518    4630 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0803 18:03:56.178534    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0803 18:03:56.525029    4630 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0803 18:03:56.525050    4630 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0803 18:03:56.525056    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0803 18:03:56.647005    4630 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0803 18:03:56.647047    4630 cache_images.go:92] duration metric: took 1.232707917s to LoadCachedImages
	W0803 18:03:56.647087    4630 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
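The image-priming loop above repeats one fixed pattern per image: stat the target path in the VM, scp the tarball over when the stat fails, then pipe it into docker load. Below is a minimal sketch of that protocol in Go, assuming an ssh host alias named `minikube`; the function name and paths are illustrative, not minikube's actual API.

```go
// Minimal sketch of the cache-image load protocol visible in the log above.
package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the three logged steps: stat the target path,
// scp the tarball over when the stat fails, then pipe it into `docker load`.
func loadCachedImage(local, remote string) error {
	// 1. Existence check: a non-zero exit means the image must be copied.
	if err := exec.Command("ssh", "minikube", "stat", "-c", "%s %y", remote).Run(); err != nil {
		// 2. Transfer the image tarball into /var/lib/minikube/images.
		if err := exec.Command("scp", local, "minikube:"+remote).Run(); err != nil {
			return fmt.Errorf("scp %s: %w", local, err)
		}
	}
	// 3. Load it into the container runtime, exactly as the log shows.
	return exec.Command("ssh", "minikube", "/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", remote)).Run()
}

func main() {
	if err := loadCachedImage("pause_3.7", "/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println("load failed:", err)
	}
}
```

The docker rmi calls earlier in the log feed the same loop: an image whose hash does not match the cached one is removed first, so the freshly loaded tarball takes its place.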
	I0803 18:03:56.647094    4630 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0803 18:03:56.647149    4630 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-413000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 18:03:56.647211    4630 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0803 18:03:56.660880    4630 cni.go:84] Creating CNI manager for ""
	I0803 18:03:56.660893    4630 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:03:56.660899    4630 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 18:03:56.660908    4630 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-413000 NodeName:stopped-upgrade-413000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 18:03:56.660980    4630 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-413000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 18:03:56.661037    4630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0803 18:03:56.664476    4630 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 18:03:56.664506    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 18:03:56.667644    4630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0803 18:03:56.672720    4630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 18:03:56.677837    4630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
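The 2096-byte payload written to kubeadm.yaml.new is the generated config printed above: four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by `---`. A sketch of walking such a multi-document file with gopkg.in/yaml.v3, purely as an illustration of the format:

```go
// Walk a multi-document kubeadm config and print each document's kind.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // path is illustrative
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Prints InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration in turn.
		fmt.Println(doc["kind"])
	}
}
```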
	I0803 18:03:56.683280    4630 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0803 18:03:56.684638    4630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
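The one-liner above is a read-filter-append rewrite of /etc/hosts: strip any stale control-plane.minikube.internal entry, append the fresh mapping, and copy the temp file into place. The same steps spelled out in Go, as an illustration of what the shell pipeline does (the real code shells out exactly as logged):

```go
// Rebuild /etc/hosts with a single control-plane.minikube.internal entry.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "10.0.2.15\tcontrol-plane.minikube.internal"
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		// Drop any existing mapping, matching the grep -v in the log.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	// The log writes to /tmp/h.$$ first and then `sudo cp`s it into place;
	// writing a temp file and copying gives the same effect.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/hosts.new; copy into place with elevated privileges")
}
```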
	I0803 18:03:56.688594    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:03:56.775224    4630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 18:03:56.780890    4630 certs.go:68] Setting up /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000 for IP: 10.0.2.15
	I0803 18:03:56.780896    4630 certs.go:194] generating shared ca certs ...
	I0803 18:03:56.780905    4630 certs.go:226] acquiring lock for ca certs: {Name:mk4c6ee72dd2b768bec67e582e0b6b1af1b504e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:03:56.781068    4630 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.key
	I0803 18:03:56.781125    4630 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.key
	I0803 18:03:56.781130    4630 certs.go:256] generating profile certs ...
	I0803 18:03:56.781219    4630 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/client.key
	I0803 18:03:56.781235    4630 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657
	I0803 18:03:56.781246    4630 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0803 18:03:57.052023    4630 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 ...
	I0803 18:03:57.052040    4630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657: {Name:mkee3041379328624e4e79a515ed80df02ed59f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:03:57.052383    4630 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657 ...
	I0803 18:03:57.052389    4630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657: {Name:mk7c5694bb8397d1fed4b6507c5be27e8fbc5792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:03:57.052531    4630 certs.go:381] copying /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt.19a19657 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt
	I0803 18:03:57.052689    4630 certs.go:385] copying /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key.19a19657 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key
	I0803 18:03:57.052864    4630 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/proxy-client.key
	I0803 18:03:57.053008    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673.pem (1338 bytes)
	W0803 18:03:57.053036    4630 certs.go:480] ignoring /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673_empty.pem, impossibly tiny 0 bytes
	I0803 18:03:57.053042    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 18:03:57.053069    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem (1082 bytes)
	I0803 18:03:57.053088    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem (1123 bytes)
	I0803 18:03:57.053106    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/key.pem (1675 bytes)
	I0803 18:03:57.053144    4630 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem (1708 bytes)
	I0803 18:03:57.053477    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 18:03:57.061023    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0803 18:03:57.068894    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 18:03:57.076474    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 18:03:57.084476    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0803 18:03:57.092794    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 18:03:57.100390    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 18:03:57.108274    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 18:03:57.116246    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/1673.pem --> /usr/share/ca-certificates/1673.pem (1338 bytes)
	I0803 18:03:57.123977    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/ssl/certs/16732.pem --> /usr/share/ca-certificates/16732.pem (1708 bytes)
	I0803 18:03:57.132337    4630 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 18:03:57.140330    4630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 18:03:57.147878    4630 ssh_runner.go:195] Run: openssl version
	I0803 18:03:57.150325    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1673.pem && ln -fs /usr/share/ca-certificates/1673.pem /etc/ssl/certs/1673.pem"
	I0803 18:03:57.153597    4630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1673.pem
	I0803 18:03:57.155236    4630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 00:28 /usr/share/ca-certificates/1673.pem
	I0803 18:03:57.155264    4630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1673.pem
	I0803 18:03:57.157345    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1673.pem /etc/ssl/certs/51391683.0"
	I0803 18:03:57.160795    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16732.pem && ln -fs /usr/share/ca-certificates/16732.pem /etc/ssl/certs/16732.pem"
	I0803 18:03:57.164724    4630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16732.pem
	I0803 18:03:57.166552    4630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 00:28 /usr/share/ca-certificates/16732.pem
	I0803 18:03:57.166598    4630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16732.pem
	I0803 18:03:57.168584    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16732.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 18:03:57.172476    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 18:03:57.176191    4630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 18:03:57.177990    4630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:21 /usr/share/ca-certificates/minikubeCA.pem
	I0803 18:03:57.178013    4630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 18:03:57.179843    4630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 18:03:57.183119    4630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 18:03:57.184664    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 18:03:57.187537    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 18:03:57.189710    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 18:03:57.191751    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 18:03:57.193723    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 18:03:57.195492    4630 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
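Each `openssl x509 ... -checkend 86400` call above asks one question: will this certificate still be valid 24 hours from now? The equivalent check with Go's crypto/x509, for illustration:

```go
// Report whether a PEM-encoded certificate expires within the given window,
// mirroring `openssl x509 -checkend 86400` (which exits 0 when it does not).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True means the cert will already be expired d from now.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
```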
	I0803 18:03:57.197473    4630 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50497 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 18:03:57.197561    4630 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 18:03:57.208393    4630 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 18:03:57.211895    4630 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0803 18:03:57.211903    4630 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0803 18:03:57.211945    4630 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0803 18:03:57.215440    4630 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0803 18:03:57.215756    4630 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-413000" does not appear in /Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:03:57.215859    4630 kubeconfig.go:62] /Users/jenkins/minikube-integration/19364-1166/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-413000" cluster setting kubeconfig missing "stopped-upgrade-413000" context setting]
	I0803 18:03:57.216082    4630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/kubeconfig: {Name:mk0a3c55e1982b2d92db1034b47f8334d27942c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:03:57.216522    4630 kapi.go:59] client config for stopped-upgrade-413000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1019a01b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 18:03:57.216838    4630 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0803 18:03:57.220132    4630 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-413000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
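Drift detection here rides on the exit code of `diff -u`: 0 means the staged kubeadm.yaml.new matches the existing file, 1 means it differs and the cluster is reconfigured from the new file. A sketch of that decision; the helper name is illustrative:

```go
// Decide whether the staged kubeadm config differs from the deployed one
// by interpreting diff's exit status.
package main

import (
	"fmt"
	"os/exec"
)

func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files are identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit 2 or worse: diff itself failed
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(diff)
}
```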
	I0803 18:03:57.220141    4630 kubeadm.go:1160] stopping kube-system containers ...
	I0803 18:03:57.220194    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 18:03:57.231543    4630 docker.go:483] Stopping containers: [a41ac171ebac 64391ce8f5a9 db60aaba5af7 eaff7d840b96 d4fb7551ff98 51278babd119 ca2ef152d64a 2fce2c3712d4]
	I0803 18:03:57.231620    4630 ssh_runner.go:195] Run: docker stop a41ac171ebac 64391ce8f5a9 db60aaba5af7 eaff7d840b96 d4fb7551ff98 51278babd119 ca2ef152d64a 2fce2c3712d4
	I0803 18:03:57.243772    4630 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0803 18:03:57.249560    4630 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 18:03:57.252852    4630 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 18:03:57.252861    4630 kubeadm.go:157] found existing configuration files:
	
	I0803 18:03:57.252889    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/admin.conf
	I0803 18:03:57.256345    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 18:03:57.256387    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 18:03:57.259998    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/kubelet.conf
	I0803 18:03:57.263351    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 18:03:57.263392    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 18:03:57.266312    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/controller-manager.conf
	I0803 18:03:57.268998    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 18:03:57.269036    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 18:03:57.272506    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/scheduler.conf
	I0803 18:03:57.275846    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 18:03:57.275882    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
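The four grep-then-rm exchanges above are one cleanup pass: any kubeconfig that does not reference the expected control-plane endpoint (here, they are simply absent) is removed before kubeadm regenerates it. Condensed into a sketch:

```go
// Remove kubeconfigs that do not mention the expected control-plane endpoint.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50497"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint (or the file) is missing.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Println("removing", conf)
			exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}
```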
	I0803 18:03:57.279295    4630 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 18:03:57.282401    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:03:57.304578    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:03:57.943283    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:03:58.069700    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:03:58.095976    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0803 18:03:58.118053    4630 api_server.go:52] waiting for apiserver process to appear ...
	I0803 18:03:58.118134    4630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:03:58.620209    4630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:03:59.120203    4630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:03:59.124523    4630 api_server.go:72] duration metric: took 1.006495708s to wait for apiserver process to appear ...
	I0803 18:03:59.124537    4630 api_server.go:88] waiting for apiserver healthz status ...
	I0803 18:03:59.124546    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:04.126550    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:04.126589    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:09.126818    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:09.126860    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:14.127211    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:14.127248    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:19.127727    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:19.127780    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:24.128504    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:24.128545    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:29.129411    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:29.129432    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:34.130546    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:34.130642    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:39.132535    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:39.132595    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:44.134801    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:44.134841    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:49.136994    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:49.137019    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:54.139094    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:04:54.139136    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:04:59.141300    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
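The cadence above, one failed healthz probe roughly every five seconds, comes from the HTTP client's own timeout: each GET blocks for the full five seconds before reporting "Client.Timeout exceeded while awaiting headers". A minimal reproduction of the poll loop follows; InsecureSkipVerify is a shortcut for the sketch only, since the real client trusts the cluster CA.

```go
// Poll the apiserver healthz endpoint until it answers 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5s gaps between probes in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
```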
	I0803 18:04:59.141442    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:04:59.155297    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:04:59.155381    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:04:59.166874    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:04:59.166946    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:04:59.177366    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:04:59.177429    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:04:59.187488    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:04:59.187562    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:04:59.198161    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:04:59.198225    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:04:59.208312    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:04:59.208383    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:04:59.221100    4630 logs.go:276] 0 containers: []
	W0803 18:04:59.221110    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:04:59.221166    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:04:59.231274    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:04:59.231293    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:04:59.231299    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:04:59.272448    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:04:59.272458    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:04:59.289697    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:04:59.289708    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:04:59.330997    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:04:59.331012    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:04:59.342458    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:04:59.342467    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:04:59.360788    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:04:59.360798    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:04:59.372294    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:04:59.372304    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:04:59.391770    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:04:59.391781    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:04:59.415692    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:04:59.415698    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:04:59.519994    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:04:59.520009    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:04:59.533917    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:04:59.533931    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:04:59.549582    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:04:59.549594    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:04:59.567076    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:04:59.567086    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:04:59.571335    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:04:59.571345    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:04:59.587193    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:04:59.587203    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:04:59.600067    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:04:59.600078    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:04:59.621515    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:04:59.621525    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
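When the apiserver stays unreachable, the retry loop degrades into the diagnostics pass above: list the containers per component, then tail the last 400 lines of each. The same sweep as a sketch, with the container IDs copied from the log purely as sample input:

```go
// Tail the recent logs of each discovered control-plane container.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	containers := map[string]string{
		"kube-apiserver": "81ed1708fd6b", // IDs are samples taken from the log
		"etcd":           "09bbda970489",
		"coredns":        "5f215627d79c",
	}
	for name, id := range containers {
		out, err := exec.Command("/bin/bash", "-c",
			fmt.Sprintf("docker logs --tail 400 %s", id)).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}
```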
	I0803 18:05:02.135416    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:07.136508    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:07.136654    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:07.149019    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:07.149100    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:07.160582    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:07.160654    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:07.171150    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:07.171219    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:07.183768    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:07.183842    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:07.194761    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:07.194830    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:07.209739    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:07.209801    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:07.219998    4630 logs.go:276] 0 containers: []
	W0803 18:05:07.220009    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:07.220070    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:07.230725    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:07.230740    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:07.230745    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:07.242586    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:07.242599    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:07.257186    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:07.257195    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:07.274679    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:07.274692    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:07.299972    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:07.299979    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:07.311791    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:07.311805    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:07.327095    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:07.327106    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:07.341720    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:07.341731    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:07.355269    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:07.355280    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:07.370070    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:07.370079    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:07.383760    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:07.383774    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:07.395271    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:07.395281    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:07.432293    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:07.432303    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:07.443401    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:07.443412    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:07.454335    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:07.454350    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:07.490948    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:07.490956    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:07.495038    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:07.495046    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:10.035772    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:15.037866    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:15.038038    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:15.053669    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:15.053754    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:15.066372    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:15.066448    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:15.077259    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:15.077331    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:15.087326    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:15.087397    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:15.097789    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:15.097856    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:15.108185    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:15.108254    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:15.118573    4630 logs.go:276] 0 containers: []
	W0803 18:05:15.118584    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:15.118640    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:15.129001    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:15.129022    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:15.129028    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:15.168352    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:15.168364    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:15.209955    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:15.209966    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:15.229909    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:15.229919    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:15.243851    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:15.243861    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:15.258588    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:15.258598    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:15.276362    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:15.276371    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:15.290003    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:15.290017    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:15.301526    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:15.301538    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:15.305563    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:15.305572    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:15.316792    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:15.316807    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:15.331240    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:15.331251    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:15.345499    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:15.345509    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:15.356711    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:15.356721    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:15.368826    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:15.368836    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:15.405194    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:15.405205    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:15.428546    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:15.428554    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:17.949899    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:22.952057    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:22.952214    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:22.968967    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:22.969043    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:22.982021    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:22.982090    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:22.992266    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:22.992328    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:23.002918    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:23.002982    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:23.013423    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:23.013491    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:23.023401    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:23.023468    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:23.033883    4630 logs.go:276] 0 containers: []
	W0803 18:05:23.033894    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:23.033947    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:23.044981    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:23.045000    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:23.045010    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:23.059684    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:23.059694    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:23.074083    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:23.074093    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:23.085591    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:23.085599    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:23.101132    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:23.101146    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:23.112348    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:23.112360    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:23.126544    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:23.126553    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:23.138857    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:23.138868    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:23.150952    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:23.150963    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:23.189754    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:23.189764    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:23.193746    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:23.193752    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:23.228734    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:23.228744    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:23.243430    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:23.243439    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:23.269320    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:23.269327    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:23.285321    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:23.285336    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:23.296800    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:23.296812    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:23.314834    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:23.314845    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:25.854374    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:30.856589    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:30.856715    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:30.874364    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:30.874457    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:30.886121    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:30.886188    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:30.896931    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:30.897003    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:30.907302    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:30.907374    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:30.917693    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:30.917754    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:30.932456    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:30.932526    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:30.942962    4630 logs.go:276] 0 containers: []
	W0803 18:05:30.942973    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:30.943028    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:30.952915    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:30.952934    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:30.952940    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:30.957077    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:30.957084    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:30.971232    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:30.971242    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:31.009634    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:31.009645    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:31.021425    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:31.021439    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:31.046690    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:31.046704    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:31.061612    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:31.061623    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:31.078652    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:31.078663    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:31.089880    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:31.089893    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:31.101085    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:31.101096    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:31.115773    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:31.115784    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:31.130377    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:31.130391    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:31.145833    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:31.145844    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:31.159288    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:31.159299    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:31.171586    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:31.171600    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:31.209446    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:31.209457    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:31.249081    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:31.249092    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:33.761978    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:38.764292    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:38.764486    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:38.786516    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:38.786634    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:38.801631    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:38.801710    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:38.818728    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:38.818799    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:38.829400    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:38.829480    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:38.844102    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:38.844176    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:38.854285    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:38.854355    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:38.864451    4630 logs.go:276] 0 containers: []
	W0803 18:05:38.864464    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:38.864521    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:38.875095    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:38.875116    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:38.875120    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:38.914219    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:38.914225    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:38.928109    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:38.928119    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:38.939989    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:38.939999    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:38.965474    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:38.965484    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:38.977578    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:38.977589    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:38.981617    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:38.981623    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:39.018752    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:39.018765    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:39.033478    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:39.033488    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:39.048155    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:39.048166    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:39.065802    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:39.065813    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:39.077593    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:39.077603    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:39.088517    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:39.088527    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:39.123004    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:39.123015    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:39.141475    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:39.141487    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:39.153556    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:39.153567    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:39.167956    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:39.167969    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:41.680972    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:46.683455    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:46.683620    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:46.699560    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:46.699649    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:46.711683    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:46.711759    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:46.722572    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:46.722638    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:46.733707    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:46.733773    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:46.744528    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:46.744599    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:46.756236    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:46.756305    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:46.766923    4630 logs.go:276] 0 containers: []
	W0803 18:05:46.766936    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:46.766998    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:46.778023    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:46.778043    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:46.778049    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:46.792744    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:46.792754    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:46.806444    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:46.806454    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:46.819919    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:46.819929    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:46.832828    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:46.832843    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:46.845282    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:46.845292    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:46.889061    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:46.889075    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:46.927456    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:46.927474    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:46.945151    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:46.945163    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:46.959899    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:46.959912    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:46.975939    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:46.975954    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:46.989298    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:46.989309    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:47.029862    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:47.029874    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:47.041313    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:47.041325    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:47.059547    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:47.059557    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:47.064300    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:47.064307    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:47.088081    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:47.088090    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:49.601442    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:05:54.603692    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:05:54.603912    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:05:54.623153    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:05:54.623251    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:05:54.637517    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:05:54.637601    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:05:54.651727    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:05:54.651805    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:05:54.662833    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:05:54.662909    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:05:54.673388    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:05:54.673455    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:05:54.684024    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:05:54.684093    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:05:54.694946    4630 logs.go:276] 0 containers: []
	W0803 18:05:54.694958    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:05:54.695018    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:05:54.705633    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:05:54.705654    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:05:54.705660    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:05:54.744973    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:05:54.744980    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:05:54.781718    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:05:54.781734    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:05:54.796955    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:05:54.796969    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:05:54.808912    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:05:54.808924    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:05:54.833325    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:05:54.833335    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:05:54.845157    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:05:54.845172    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:05:54.857688    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:05:54.857700    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:05:54.869451    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:05:54.869461    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:05:54.873515    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:05:54.873525    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:05:54.887228    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:05:54.887237    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:05:54.901863    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:05:54.901873    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:05:54.916282    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:05:54.916293    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:05:54.933655    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:05:54.933665    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:05:54.972013    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:05:54.972023    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:05:54.989125    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:05:54.989138    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:05:55.000783    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:05:55.000798    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:05:57.514510    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:02.516696    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:02.516818    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:02.528864    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:02.528943    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:02.539712    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:02.539790    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:02.550872    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:02.550942    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:02.561767    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:02.561838    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:02.572235    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:02.572300    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:02.583015    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:02.583086    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:02.593246    4630 logs.go:276] 0 containers: []
	W0803 18:06:02.593257    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:02.593311    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:02.603919    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:02.603936    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:02.603941    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:02.618279    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:02.618291    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:02.632833    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:02.632842    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:02.651678    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:02.651695    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:02.664089    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:02.664102    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:02.676465    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:02.676479    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:02.687954    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:02.687969    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:02.692690    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:02.692699    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:02.735163    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:02.735172    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:02.749961    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:02.749975    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:02.761397    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:02.761413    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:02.787178    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:02.787185    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:02.826413    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:02.826420    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:02.861281    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:02.861295    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:02.875611    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:02.875626    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:02.889530    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:02.889540    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:02.901005    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:02.901015    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:05.415716    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:10.417971    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:10.418142    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:10.434850    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:10.434928    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:10.446032    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:10.446100    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:10.456479    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:10.456550    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:10.467318    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:10.467391    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:10.477888    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:10.477953    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:10.489377    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:10.489450    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:10.500380    4630 logs.go:276] 0 containers: []
	W0803 18:06:10.500392    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:10.500448    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:10.510780    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:10.510797    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:10.510803    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:10.515953    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:10.515959    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:10.551717    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:10.551728    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:10.566107    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:10.566120    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:10.604175    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:10.604187    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:10.616658    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:10.616670    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:10.654316    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:10.654327    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:10.673033    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:10.673043    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:10.684905    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:10.684921    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:10.696347    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:10.696359    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:10.708106    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:10.708116    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:10.722669    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:10.722681    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:10.736996    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:10.737006    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:10.761261    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:10.761271    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:10.773798    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:10.773808    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:10.797058    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:10.797066    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:10.812423    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:10.812436    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:13.331571    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:18.333700    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:18.333836    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:18.345536    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:18.345610    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:18.356252    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:18.356330    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:18.366638    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:18.366710    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:18.377143    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:18.377213    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:18.387546    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:18.387620    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:18.397967    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:18.398037    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:18.408462    4630 logs.go:276] 0 containers: []
	W0803 18:06:18.408472    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:18.408525    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:18.418788    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:18.418806    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:18.418810    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:18.456011    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:18.456022    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:18.470295    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:18.470308    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:18.487293    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:18.487303    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:18.524623    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:18.524633    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:18.528815    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:18.528822    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:18.543565    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:18.543574    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:18.563527    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:18.563539    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:18.575243    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:18.575253    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:18.590407    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:18.590420    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:18.603956    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:18.603969    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:18.644415    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:18.644425    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:18.658056    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:18.658069    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:18.670092    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:18.670104    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:18.694138    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:18.694150    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:18.708255    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:18.708265    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:18.721388    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:18.721400    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:21.236365    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:26.238438    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:26.238589    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:26.252605    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:26.252678    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:26.263993    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:26.264060    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:26.277942    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:26.278006    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:26.289310    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:26.289375    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:26.299892    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:26.299957    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:26.310708    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:26.310769    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:26.320948    4630 logs.go:276] 0 containers: []
	W0803 18:06:26.320958    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:26.321013    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:26.331366    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:26.331383    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:26.331388    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:26.345864    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:26.345875    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:26.357593    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:26.357603    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:26.375558    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:26.375570    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:26.388617    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:26.388628    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:26.401943    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:26.401954    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:26.415609    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:26.415620    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:26.426973    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:26.426985    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:26.438966    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:26.438976    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:26.445076    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:26.445082    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:26.456046    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:26.456057    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:26.471167    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:26.471180    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:26.483554    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:26.483568    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:26.498968    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:26.498981    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:26.533879    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:26.533890    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:26.572560    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:26.572574    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:26.595754    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:26.595761    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:29.135290    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:34.137367    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:34.137575    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:34.158817    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:34.158925    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:34.175041    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:34.175124    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:34.188614    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:34.188687    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:34.199545    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:34.199619    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:34.210272    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:34.210341    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:34.220619    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:34.220681    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:34.230763    4630 logs.go:276] 0 containers: []
	W0803 18:06:34.230777    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:34.230836    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:34.249198    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:34.249220    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:34.249226    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:34.261348    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:34.261360    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:34.281589    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:34.281600    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:34.306734    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:34.306749    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:34.321817    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:34.321828    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:34.356929    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:34.356944    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:34.370737    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:34.370749    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:34.382831    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:34.382845    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:34.394844    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:34.394855    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:34.434772    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:34.434782    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:34.438939    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:34.438947    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:34.453134    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:34.453144    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:34.464266    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:34.464278    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:34.500916    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:34.500926    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:34.515498    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:34.515507    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:34.533678    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:34.533692    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:34.554691    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:34.554705    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:37.079788    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:42.081916    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:42.082164    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:42.102991    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:42.103086    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:42.122623    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:42.122701    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:42.134466    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:42.134532    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:42.146692    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:42.146768    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:42.157402    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:42.157467    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:42.167973    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:42.168049    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:42.181194    4630 logs.go:276] 0 containers: []
	W0803 18:06:42.181208    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:42.181267    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:42.197112    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:42.197133    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:42.197140    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:42.216668    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:42.216683    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:42.231448    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:42.231460    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:42.243316    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:42.243329    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:42.279660    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:42.279672    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:42.317905    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:42.317920    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:42.330923    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:42.330936    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:42.354254    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:42.354261    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:42.392148    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:42.392157    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:42.406269    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:42.406281    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:42.436501    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:42.436511    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:42.456787    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:42.456800    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:42.474522    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:42.474532    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:42.478588    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:42.478594    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:42.492659    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:42.492671    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:42.503849    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:42.503858    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:42.521699    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:42.521712    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:45.036735    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:50.039195    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:50.039366    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:50.056935    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:50.057035    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:50.070301    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:50.070370    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:50.085157    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:50.085225    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:50.095366    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:50.095436    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:50.105878    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:50.105944    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:50.116778    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:50.116840    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:50.126633    4630 logs.go:276] 0 containers: []
	W0803 18:06:50.126642    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:50.126693    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:50.137562    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:50.137580    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:50.137586    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:50.151989    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:50.151999    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:50.163907    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:50.163920    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:50.175106    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:50.175119    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:50.189044    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:50.189055    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:50.224355    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:50.224370    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:50.263442    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:50.263454    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:50.277927    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:50.277938    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:50.289910    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:50.289920    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:50.301937    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:50.301946    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:50.341700    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:50.341712    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:50.359477    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:50.359487    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:50.379569    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:50.379581    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:50.397290    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:50.397303    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:06:50.420170    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:50.420177    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:50.424166    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:50.424173    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:50.438156    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:50.438166    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
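[editor's note] The cycle above repeats for the rest of this section: minikube probes the apiserver's /healthz endpoint, the request times out after ~5s, and the tool falls back to gathering component logs before re-probing. Below is a minimal Go sketch of that probe pattern, written for illustration only; it is not minikube's api_server.go, and the endpoint URL and timeout are simply taken from the log lines above.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // 5s client timeout mirrors the "Client.Timeout exceeded" errors in the log.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the VM's apiserver cert is not trusted from the host; skip verification
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://10.0.2.15:8443/healthz" // endpoint taken from the log
        for attempt := 1; attempt <= 3; attempt++ {
            resp, err := client.Get(url)
            if err != nil {
                // matches the shape of "stopped: <url>: Get ...: context deadline exceeded"
                fmt.Printf("stopped: %s: %v\n", url, err)
                time.Sleep(2 * time.Second) // back off, then re-probe
                continue
            }
            resp.Body.Close()
            fmt.Printf("healthz: %s\n", resp.Status)
            return
        }
    }
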
	I0803 18:06:52.954709    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:06:57.956947    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:06:57.957102    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:06:57.972774    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:06:57.972864    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:06:57.985737    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:06:57.985814    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:06:57.996911    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:06:57.996976    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:06:58.007521    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:06:58.007588    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:06:58.018439    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:06:58.018502    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:06:58.029769    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:06:58.029844    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:06:58.041325    4630 logs.go:276] 0 containers: []
	W0803 18:06:58.041336    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:06:58.041405    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:06:58.051946    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:06:58.051961    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:06:58.051967    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:06:58.063327    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:06:58.063338    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:06:58.074822    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:06:58.074834    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:06:58.086610    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:06:58.086621    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:06:58.103893    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:06:58.103904    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:06:58.116243    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:06:58.116257    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:06:58.120398    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:06:58.120404    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:06:58.138709    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:06:58.138721    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:06:58.150644    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:06:58.150659    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:06:58.164212    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:06:58.164224    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:06:58.185188    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:06:58.185198    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:06:58.220150    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:06:58.220161    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:06:58.258082    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:06:58.258093    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:06:58.272668    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:06:58.272678    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:06:58.285446    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:06:58.285457    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:06:58.322498    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:06:58.322507    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:06:58.336590    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:06:58.336602    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
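[editor's note] Each "Gathering logs for <component>" pass pairs two docker invocations visible above: `docker ps -a --filter=name=<k8s_name> --format={{.ID}}` to resolve container IDs, then `docker logs --tail 400 <id>` per ID. A hedged Go sketch of that pairing follows (hypothetical file gather_logs.go; assumes a local docker CLI; not minikube's logs.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists container IDs whose name matches the given filter,
    // mirroring the docker ps invocation in the log.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name="+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("k8s_etcd") // filter string taken from the log
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
        for _, id := range ids {
            // mirrors the log's "docker logs --tail 400 <id>" step
            out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("=== logs for %s ===\n%s", id, out)
        }
    }
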
	I0803 18:07:00.862486    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:05.864643    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:05.864831    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:05.879854    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:05.879938    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:05.891698    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:05.891769    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:05.902247    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:05.902318    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:05.916681    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:05.916789    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:05.927245    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:05.927308    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:05.938362    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:05.938435    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:05.949854    4630 logs.go:276] 0 containers: []
	W0803 18:07:05.949866    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:05.949925    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:05.960453    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:05.960471    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:05.960476    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:05.964607    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:05.964613    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:05.979743    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:05.979760    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:05.994470    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:05.994483    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:06.009978    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:06.009995    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:06.021601    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:06.021615    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:06.036173    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:06.036184    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:06.071045    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:06.071056    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:06.085173    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:06.085184    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:06.098728    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:06.098742    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:06.110756    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:06.110767    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:06.129652    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:06.129662    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:06.169463    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:06.169473    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:06.185088    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:06.185098    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:06.223238    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:06.223247    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:06.235459    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:06.235471    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:06.257721    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:06.257730    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:08.771501    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:13.773618    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:13.773837    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:13.791657    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:13.791753    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:13.805625    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:13.805701    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:13.817620    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:13.817688    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:13.828602    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:13.828673    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:13.839133    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:13.839207    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:13.849875    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:13.849945    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:13.860488    4630 logs.go:276] 0 containers: []
	W0803 18:07:13.860501    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:13.860555    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:13.870902    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:13.870919    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:13.870925    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:13.885722    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:13.885736    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:13.901584    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:13.901600    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:13.914115    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:13.914126    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:13.918386    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:13.918394    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:13.958268    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:13.958278    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:13.970348    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:13.970358    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:13.989582    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:13.989592    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:14.003725    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:14.003736    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:14.015002    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:14.015013    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:14.037263    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:14.037271    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:14.049420    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:14.049431    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:14.063995    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:14.064007    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:14.078130    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:14.078146    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:14.093105    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:14.093116    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:14.130666    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:14.130674    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:14.166513    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:14.166525    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:16.684759    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:21.686858    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:21.686972    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:21.703868    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:21.703941    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:21.714107    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:21.714179    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:21.729457    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:21.729527    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:21.740671    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:21.740737    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:21.751353    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:21.751424    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:21.761597    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:21.761673    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:21.772344    4630 logs.go:276] 0 containers: []
	W0803 18:07:21.772356    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:21.772414    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:21.783250    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:21.783269    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:21.783276    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:21.797200    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:21.797212    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:21.810782    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:21.810791    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:21.828686    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:21.828696    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:21.844825    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:21.844838    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:21.867669    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:21.867679    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:21.902580    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:21.902592    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:21.916782    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:21.916792    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:21.930060    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:21.930070    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:21.941416    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:21.941428    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:21.946404    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:21.946412    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:21.984050    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:21.984060    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:21.998031    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:21.998041    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:22.013094    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:22.013104    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:22.029869    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:22.029885    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:22.069356    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:22.069366    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:22.080955    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:22.080969    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:24.596015    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:29.598270    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:29.598542    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:29.624614    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:29.624741    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:29.652036    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:29.652112    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:29.663773    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:29.663843    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:29.674239    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:29.674301    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:29.684829    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:29.684900    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:29.695404    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:29.695474    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:29.705675    4630 logs.go:276] 0 containers: []
	W0803 18:07:29.705686    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:29.705744    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:29.716293    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:29.716308    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:29.716314    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:29.753799    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:29.753806    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:29.765597    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:29.765607    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:29.777807    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:29.777821    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:29.792328    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:29.792341    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:29.805448    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:29.805458    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:29.817311    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:29.817322    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:29.831831    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:29.831840    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:29.849358    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:29.849368    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:29.871626    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:29.871632    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:29.906573    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:29.906583    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:29.946492    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:29.946505    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:29.960639    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:29.960650    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:29.971814    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:29.971828    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:29.983474    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:29.983485    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:29.987414    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:29.987422    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:30.002624    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:30.002634    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:32.518901    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:37.521215    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:37.521561    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:37.553926    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:37.554060    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:37.573936    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:37.574037    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:37.587842    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:37.587939    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:37.600149    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:37.600224    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:37.610989    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:37.611063    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:37.625265    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:37.625339    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:37.640317    4630 logs.go:276] 0 containers: []
	W0803 18:07:37.640328    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:37.640388    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:37.654153    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:37.654172    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:37.654178    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:37.693851    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:37.693861    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:37.734264    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:37.734284    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:37.749488    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:37.749501    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:37.765788    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:37.765801    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:37.809446    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:37.809460    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:37.822585    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:37.822596    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:37.834503    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:37.834515    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:37.848039    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:37.848050    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:37.862460    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:37.862470    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:37.877214    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:37.877225    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:37.892657    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:37.892668    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:37.905718    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:37.905731    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:37.911122    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:37.911129    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:37.926147    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:37.926161    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:37.944960    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:37.944975    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:37.961778    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:37.961788    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:40.487321    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:45.489573    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:45.489789    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:45.505719    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:45.505794    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:45.518464    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:45.518537    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:45.529281    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:45.529349    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:45.539865    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:45.539935    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:45.550761    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:45.550833    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:45.568170    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:45.568239    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:45.579486    4630 logs.go:276] 0 containers: []
	W0803 18:07:45.579495    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:45.579551    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:45.590487    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:45.590506    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:45.590512    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:45.625712    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:45.625724    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:45.640291    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:45.640302    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:45.679200    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:45.679211    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:45.695536    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:45.695546    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:45.723670    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:45.723686    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:45.750428    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:45.750439    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:45.788319    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:45.788327    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:45.792521    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:45.792531    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:45.805067    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:45.805077    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:45.818949    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:45.818959    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:45.830478    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:45.830489    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:45.848464    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:45.848475    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:45.860500    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:45.860509    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:45.871859    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:45.871870    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:45.882537    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:45.882548    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:45.904912    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:45.904920    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
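[editor's note] The recurring "container status" command above uses a shell fallback: try crictl if present, otherwise fall back to `docker ps -a`. The Go sketch below approximates that fallback with exec.LookPath (hypothetical file container_status.go; the original does it inline in bash with `which crictl || echo crictl`):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tool := "crictl"
        if _, err := exec.LookPath(tool); err != nil {
            tool = "docker" // fallback path, as in "|| sudo docker ps -a" above
        }
        out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Printf("%s", out)
    }
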
	I0803 18:07:48.418774    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:07:53.419591    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:07:53.419719    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:07:53.431250    4630 logs.go:276] 2 containers: [81ed1708fd6b db60aaba5af7]
	I0803 18:07:53.431322    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:07:53.443841    4630 logs.go:276] 2 containers: [09bbda970489 eaff7d840b96]
	I0803 18:07:53.443904    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:07:53.457468    4630 logs.go:276] 1 containers: [5f215627d79c]
	I0803 18:07:53.457540    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:07:53.468260    4630 logs.go:276] 2 containers: [135bcc7cf850 d4fb7551ff98]
	I0803 18:07:53.468331    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:07:53.479146    4630 logs.go:276] 1 containers: [6d2574ad3d0f]
	I0803 18:07:53.479211    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:07:53.489871    4630 logs.go:276] 2 containers: [718f7dff79a6 a41ac171ebac]
	I0803 18:07:53.489950    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:07:53.500759    4630 logs.go:276] 0 containers: []
	W0803 18:07:53.500769    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:07:53.500830    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:07:53.511413    4630 logs.go:276] 2 containers: [1431b39e7f18 1c409466f72c]
	I0803 18:07:53.511433    4630 logs.go:123] Gathering logs for kube-apiserver [db60aaba5af7] ...
	I0803 18:07:53.511441    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db60aaba5af7"
	I0803 18:07:53.550011    4630 logs.go:123] Gathering logs for kube-proxy [6d2574ad3d0f] ...
	I0803 18:07:53.550021    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2574ad3d0f"
	I0803 18:07:53.562208    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:07:53.562219    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:07:53.577711    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:07:53.577724    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:07:53.582217    4630 logs.go:123] Gathering logs for kube-apiserver [81ed1708fd6b] ...
	I0803 18:07:53.582224    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ed1708fd6b"
	I0803 18:07:53.596779    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:07:53.596789    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:07:53.618341    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:07:53.618348    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:07:53.654730    4630 logs.go:123] Gathering logs for storage-provisioner [1c409466f72c] ...
	I0803 18:07:53.654741    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c409466f72c"
	I0803 18:07:53.666648    4630 logs.go:123] Gathering logs for kube-scheduler [135bcc7cf850] ...
	I0803 18:07:53.666658    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 135bcc7cf850"
	I0803 18:07:53.680588    4630 logs.go:123] Gathering logs for kube-scheduler [d4fb7551ff98] ...
	I0803 18:07:53.680598    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4fb7551ff98"
	I0803 18:07:53.695272    4630 logs.go:123] Gathering logs for storage-provisioner [1431b39e7f18] ...
	I0803 18:07:53.695281    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1431b39e7f18"
	I0803 18:07:53.706901    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:07:53.706915    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:07:53.747002    4630 logs.go:123] Gathering logs for etcd [eaff7d840b96] ...
	I0803 18:07:53.747014    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaff7d840b96"
	I0803 18:07:53.761923    4630 logs.go:123] Gathering logs for kube-controller-manager [718f7dff79a6] ...
	I0803 18:07:53.761934    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 718f7dff79a6"
	I0803 18:07:53.779792    4630 logs.go:123] Gathering logs for kube-controller-manager [a41ac171ebac] ...
	I0803 18:07:53.779803    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a41ac171ebac"
	I0803 18:07:53.792413    4630 logs.go:123] Gathering logs for etcd [09bbda970489] ...
	I0803 18:07:53.792422    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09bbda970489"
	I0803 18:07:53.806035    4630 logs.go:123] Gathering logs for coredns [5f215627d79c] ...
	I0803 18:07:53.806045    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f215627d79c"
	I0803 18:07:56.319771    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:01.322010    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:01.322086    4630 kubeadm.go:597] duration metric: took 4m4.117150917s to restartPrimaryControlPlane
	W0803 18:08:01.322180    4630 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0803 18:08:01.322218    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0803 18:08:02.338583    4630 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.016381792s)
	I0803 18:08:02.338657    4630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 18:08:02.343631    4630 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 18:08:02.346668    4630 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 18:08:02.349453    4630 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 18:08:02.349458    4630 kubeadm.go:157] found existing configuration files:
	
	I0803 18:08:02.349484    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/admin.conf
	I0803 18:08:02.351874    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 18:08:02.351895    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 18:08:02.354757    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/kubelet.conf
	I0803 18:08:02.357619    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 18:08:02.357639    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 18:08:02.360417    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/controller-manager.conf
	I0803 18:08:02.362925    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 18:08:02.362944    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 18:08:02.365826    4630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/scheduler.conf
	I0803 18:08:02.368469    4630 kubeadm.go:163] "https://control-plane.minikube.internal:50497" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50497 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 18:08:02.368490    4630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
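[editor's note] The four grep/rm pairs above implement one stale-config cleanup pass: each kubeconfig under /etc/kubernetes is checked for the expected control-plane URL, and files that fail the check are removed before `kubeadm init` re-writes them. A minimal Go sketch of that pass (hypothetical file stale_config.go, requires root to actually remove the files; not minikube's kubeadm.go):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // URL and file list taken from the log lines above
        endpoint := []byte("https://control-plane.minikube.internal:50497")
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := filepath.Join("/etc/kubernetes", f)
            data, err := os.ReadFile(path)
            if err != nil || !bytes.Contains(data, endpoint) {
                // mirrors `"<url>" may not be in <file> - will remove` followed by rm -f
                fmt.Printf("%s may not be in %s - will remove\n", endpoint, path)
                os.Remove(path) // ignore the error, like rm -f
            }
        }
    }
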
	I0803 18:08:02.371112    4630 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 18:08:02.389679    4630 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0803 18:08:02.389705    4630 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 18:08:02.438313    4630 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 18:08:02.438364    4630 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 18:08:02.438407    4630 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 18:08:02.490315    4630 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 18:08:02.494549    4630 out.go:204]   - Generating certificates and keys ...
	I0803 18:08:02.494590    4630 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 18:08:02.494626    4630 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 18:08:02.494663    4630 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0803 18:08:02.494697    4630 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0803 18:08:02.494751    4630 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0803 18:08:02.494786    4630 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0803 18:08:02.494820    4630 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0803 18:08:02.494850    4630 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0803 18:08:02.494889    4630 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0803 18:08:02.494930    4630 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0803 18:08:02.494957    4630 kubeadm.go:310] [certs] Using the existing "sa" key
	I0803 18:08:02.494987    4630 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 18:08:02.539697    4630 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 18:08:02.597198    4630 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 18:08:02.733869    4630 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 18:08:02.834327    4630 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 18:08:02.866213    4630 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 18:08:02.866669    4630 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 18:08:02.866690    4630 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 18:08:02.952409    4630 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 18:08:02.955576    4630 out.go:204]   - Booting up control plane ...
	I0803 18:08:02.955625    4630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 18:08:02.955663    4630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 18:08:02.955705    4630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 18:08:02.955750    4630 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 18:08:02.955848    4630 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0803 18:08:06.956623    4630 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001337 seconds
	I0803 18:08:06.956688    4630 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 18:08:06.962007    4630 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 18:08:07.471470    4630 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 18:08:07.471644    4630 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-413000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 18:08:07.974712    4630 kubeadm.go:310] [bootstrap-token] Using token: ns3qrc.zgs4s8hhalx61p06
	I0803 18:08:07.978119    4630 out.go:204]   - Configuring RBAC rules ...
	I0803 18:08:07.978190    4630 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 18:08:07.978238    4630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 18:08:07.980116    4630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 18:08:07.984535    4630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 18:08:07.985464    4630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 18:08:07.986307    4630 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 18:08:07.989269    4630 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 18:08:08.162942    4630 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 18:08:08.378344    4630 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 18:08:08.378791    4630 kubeadm.go:310] 
	I0803 18:08:08.378819    4630 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 18:08:08.378824    4630 kubeadm.go:310] 
	I0803 18:08:08.378860    4630 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 18:08:08.378865    4630 kubeadm.go:310] 
	I0803 18:08:08.378876    4630 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 18:08:08.378915    4630 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 18:08:08.378941    4630 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 18:08:08.378945    4630 kubeadm.go:310] 
	I0803 18:08:08.378973    4630 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 18:08:08.378978    4630 kubeadm.go:310] 
	I0803 18:08:08.379006    4630 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 18:08:08.379011    4630 kubeadm.go:310] 
	I0803 18:08:08.379046    4630 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 18:08:08.379087    4630 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 18:08:08.379126    4630 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 18:08:08.379131    4630 kubeadm.go:310] 
	I0803 18:08:08.379174    4630 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 18:08:08.379216    4630 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 18:08:08.379220    4630 kubeadm.go:310] 
	I0803 18:08:08.379263    4630 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ns3qrc.zgs4s8hhalx61p06 \
	I0803 18:08:08.379317    4630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8926886cd496fcdb8fb5b92a5ce19b9a5533dd397e42f479b7664c72b739cada \
	I0803 18:08:08.379330    4630 kubeadm.go:310] 	--control-plane 
	I0803 18:08:08.379334    4630 kubeadm.go:310] 
	I0803 18:08:08.379375    4630 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 18:08:08.379379    4630 kubeadm.go:310] 
	I0803 18:08:08.379426    4630 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ns3qrc.zgs4s8hhalx61p06 \
	I0803 18:08:08.379478    4630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8926886cd496fcdb8fb5b92a5ce19b9a5533dd397e42f479b7664c72b739cada 
	I0803 18:08:08.379669    4630 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 18:08:08.379716    4630 cni.go:84] Creating CNI manager for ""
	I0803 18:08:08.379727    4630 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:08:08.384527    4630 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 18:08:08.388453    4630 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 18:08:08.391894    4630 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
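The two ssh_runner steps above create /etc/cni/net.d in the guest and copy a 496-byte bridge conflist into it straight from memory. The log never shows the file's contents, so the sketch below is illustrative only: a generic bridge-plugin conflist of the shape such a file usually takes, embedded in a Go snippet that mirrors the write-from-memory step. The JSON keys and the pod subnet here are assumptions, not values recovered from this run.

    package main

    import "os"

    // Illustrative only: the actual 1-k8s.conflist written above is not
    // shown in this log. This is a generic bridge CNI config of the same
    // shape; the subnet and plugin options are assumptions.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        // Mirrors "sudo mkdir -p /etc/cni/net.d" and
        // "scp memory --> /etc/cni/net.d/1-k8s.conflist" from the log.
        _ = os.MkdirAll("/etc/cni/net.d", 0o755)
        _ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }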
	I0803 18:08:08.396394    4630 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 18:08:08.396452    4630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 18:08:08.396456    4630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-413000 minikube.k8s.io/updated_at=2024_08_03T18_08_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=stopped-upgrade-413000 minikube.k8s.io/primary=true
	I0803 18:08:08.432248    4630 kubeadm.go:1113] duration metric: took 35.834291ms to wait for elevateKubeSystemPrivileges
	I0803 18:08:08.439824    4630 ops.go:34] apiserver oom_adj: -16
	I0803 18:08:08.439838    4630 kubeadm.go:394] duration metric: took 4m11.249546291s to StartCluster
	I0803 18:08:08.439850    4630 settings.go:142] acquiring lock: {Name:mkc455f89a0a1d96857baea22a1ca4141ab02c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:08:08.439953    4630 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:08:08.440388    4630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/kubeconfig: {Name:mk0a3c55e1982b2d92db1034b47f8334d27942c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:08:08.440586    4630 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:08:08.440687    4630 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:08:08.440644    4630 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 18:08:08.440734    4630 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-413000"
	I0803 18:08:08.440740    4630 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-413000"
	I0803 18:08:08.440748    4630 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-413000"
	W0803 18:08:08.440752    4630 addons.go:243] addon storage-provisioner should already be in state true
	I0803 18:08:08.440753    4630 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-413000"
	I0803 18:08:08.440763    4630 host.go:66] Checking if "stopped-upgrade-413000" exists ...
	I0803 18:08:08.441181    4630 retry.go:31] will retry after 745.250721ms: connect: dial unix /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/monitor: connect: connection refused
	I0803 18:08:08.444459    4630 out.go:177] * Verifying Kubernetes components...
	I0803 18:08:08.454421    4630 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 18:08:08.460398    4630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 18:08:08.464471    4630 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 18:08:08.464481    4630 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 18:08:08.464490    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0803 18:08:08.546453    4630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 18:08:08.552295    4630 api_server.go:52] waiting for apiserver process to appear ...
	I0803 18:08:08.552340    4630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 18:08:08.556974    4630 api_server.go:72] duration metric: took 116.379333ms to wait for apiserver process to appear ...
	I0803 18:08:08.556984    4630 api_server.go:88] waiting for apiserver healthz status ...
	I0803 18:08:08.556993    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
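The healthz wait that starts here runs for the rest of the section, interleaved with the addon setup just below: GET https://10.0.2.15:8443/healthz, give up on each attempt after roughly five seconds ("context deadline exceeded"), gather diagnostics, retry. A minimal sketch of that poll pattern follows, assuming the ~5s per-attempt timeout implied by the timestamps; the real client authenticates with the cluster CA and the client certs shown later in the rest.Config dump, which this sketch skips.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Sketch of the healthz poll visible in this log. TLS handling is
    // simplified: minikube verifies against the cluster CA rather than
    // using InsecureSkipVerify.
    func waitForHealthz(url string, deadline time.Time) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gaps between attempts
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    return nil // apiserver reports healthy
                }
            }
            // The ~5s cadence in the log comes from the client timeout
            // above; sleep briefly so fast failures (e.g. connection
            // refused) don't busy-loop.
            time.Sleep(time.Second)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        _ = waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(6*time.Minute))
    }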
	I0803 18:08:08.604308    4630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 18:08:09.189511    4630 kapi.go:59] client config for stopped-upgrade-413000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/stopped-upgrade-413000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1166/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1019a01b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 18:08:09.189640    4630 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-413000"
	W0803 18:08:09.189647    4630 addons.go:243] addon default-storageclass should already be in state true
	I0803 18:08:09.189660    4630 host.go:66] Checking if "stopped-upgrade-413000" exists ...
	I0803 18:08:09.190391    4630 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 18:08:09.190398    4630 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 18:08:09.190405    4630 sshutil.go:53] new ssh client: &{IP:localhost Port:50464 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa Username:docker}
	I0803 18:08:09.225769    4630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
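Both addons follow the same two-step pattern visible above: copy the manifest into the guest over SSH ("scp memory -->"), then apply it with the pinned kubectl binary against the in-VM kubeconfig. Below is a sketch of the apply step, with plain ssh standing in for minikube's internal ssh_runner, whose API the log does not show; the host, port, key path, and remote command are taken from the log lines above.

    package main

    import (
        "os"
        "os/exec"
    )

    // Sketch: apply an addon manifest inside the guest the way the log
    // does. Plain ssh is an assumption standing in for ssh_runner.
    func main() {
        key := "/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/stopped-upgrade-413000/id_rsa"
        remote := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml"
        cmd := exec.Command("ssh", "-i", key, "-p", "50464", "docker@localhost", remote)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        _ = cmd.Run()
    }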
	I0803 18:08:13.558938    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:13.558983    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:18.559181    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:18.559218    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:23.559844    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:23.559879    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:28.560287    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:28.560307    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:33.560861    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:33.560896    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:38.561687    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:38.561720    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0803 18:08:39.290324    4630 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0803 18:08:39.300857    4630 out.go:177] * Enabled addons: storage-provisioner
	I0803 18:08:39.309759    4630 addons.go:510] duration metric: took 30.870012792s for enable addons: enabled=[storage-provisioner]
	I0803 18:08:43.562723    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:43.562745    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:48.564387    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:48.564413    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:53.565685    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:53.565719    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:08:58.567838    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:08:58.567881    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:09:03.569981    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:09:03.570024    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:09:08.572056    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:09:08.572149    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:09:08.582953    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:09:08.583022    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:09:08.593406    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:09:08.593475    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:09:08.603285    4630 logs.go:276] 2 containers: [b84f35bdcbf8 c2611cb1b266]
	I0803 18:09:08.603355    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:09:08.613596    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:09:08.613658    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:09:08.624082    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:09:08.624153    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:09:08.634307    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:09:08.634381    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:09:08.644438    4630 logs.go:276] 0 containers: []
	W0803 18:09:08.644450    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:09:08.644509    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:09:08.654652    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:09:08.654668    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:09:08.654674    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:09:08.671779    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:09:08.671789    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:09:08.696907    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:09:08.696917    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:09:08.701501    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:09:08.701508    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:09:08.737684    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:09:08.737694    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:09:08.749457    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:09:08.749468    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:09:08.761434    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:09:08.761444    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:09:08.773734    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:09:08.773745    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:09:08.791734    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:09:08.791745    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:09:08.803366    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:09:08.803376    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:09:08.815077    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:09:08.815088    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:09:08.850710    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:09:08.850722    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:09:08.865265    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:09:08.865273    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
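Each failed healthz window from here on triggers the same diagnostic sweep just completed above: enumerate one container per control-plane component via the kubelet's "k8s_<name>_..." Docker naming convention, then tail 400 lines from each, alongside the kubelet and docker journals, dmesg, and "describe nodes". A condensed sketch of the container half of that sweep follows; it runs docker directly on the local host, whereas the log runs the same commands inside the guest through ssh_runner.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Sketch of the gather loop above: find each component's container by
    // name filter, then tail its logs. Error handling and the ssh
    // transport used in the real run are omitted.
    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            out, _ := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            ids := strings.Fields(string(out)) // may be empty, e.g. kindnet above
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
            }
        }
    }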
	I0803 18:09:11.385581    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:09:16.387838    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:09:16.388056    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:09:16.406743    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:09:16.406833    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:09:16.419569    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:09:16.419637    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:09:16.430247    4630 logs.go:276] 2 containers: [b84f35bdcbf8 c2611cb1b266]
	I0803 18:09:16.430314    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:09:16.440596    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:09:16.440662    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:09:16.450913    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:09:16.450980    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:09:16.461333    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:09:16.461402    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:09:16.471815    4630 logs.go:276] 0 containers: []
	W0803 18:09:16.471827    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:09:16.471886    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:09:16.482376    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:09:16.482391    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:09:16.482396    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:09:16.516892    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:09:16.516900    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:09:16.550785    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:09:16.550797    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:09:16.564661    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:09:16.564672    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:09:16.575941    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:09:16.575953    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:09:16.587400    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:09:16.587411    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:09:16.604258    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:09:16.604271    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:09:16.615709    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:09:16.615721    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:09:16.620091    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:09:16.620100    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:09:16.640079    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:09:16.640091    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:09:16.651039    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:09:16.651052    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:09:16.667172    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:09:16.667185    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:09:16.693299    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:09:16.693309    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:09:19.219735    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:09:24.221649    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:09:24.222110    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:09:24.265683    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:09:24.265795    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:09:24.286709    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:09:24.286790    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:09:24.301652    4630 logs.go:276] 2 containers: [b84f35bdcbf8 c2611cb1b266]
	I0803 18:09:24.301726    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:09:24.314182    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:09:24.314254    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:09:24.325306    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:09:24.325371    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:09:24.336484    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:09:24.336555    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:09:24.347247    4630 logs.go:276] 0 containers: []
	W0803 18:09:24.347258    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:09:24.347304    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:09:24.358306    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:09:24.358322    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:09:24.358327    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:09:24.363285    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:09:24.363293    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:09:24.405415    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:09:24.405429    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:09:24.425848    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:09:24.425861    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:09:24.438030    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:09:24.438040    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:09:24.456413    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:09:24.456423    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:09:24.492416    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:09:24.492423    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:09:24.506997    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:09:24.507009    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:09:24.519172    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:09:24.519183    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:09:24.535029    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:09:24.535039    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:09:24.547460    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:09:24.547473    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:09:24.559356    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:09:24.559366    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:09:24.583119    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:09:24.583128    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:09:27.098374    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:09:32.101066    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:09:32.101419    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:09:32.133979    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:09:32.134097    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:09:32.154203    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:09:32.154291    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:09:32.168661    4630 logs.go:276] 2 containers: [b84f35bdcbf8 c2611cb1b266]
	I0803 18:09:32.168725    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:09:32.180752    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:09:32.180813    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:09:32.191815    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:09:32.191883    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:09:32.202973    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:09:32.203034    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:09:32.213396    4630 logs.go:276] 0 containers: []
	W0803 18:09:32.213407    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:09:32.213455    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:09:32.224376    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:09:32.224390    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:09:32.224394    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:09:32.258212    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:09:32.258223    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:09:32.295489    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:09:32.295500    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:09:32.311417    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:09:32.311428    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:09:32.323611    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:09:32.323621    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:09:32.335983    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:09:32.335995    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:09:32.348412    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:09:32.348421    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:09:32.352685    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:09:32.352693    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:09:32.367295    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:09:32.367305    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:09:32.382257    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:09:32.382269    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:09:32.394171    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:09:32.394182    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:09:32.406512    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:09:32.406522    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:09:32.424196    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:09:32.424209    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:09:34.949068    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:09:39.951376    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:09:39.951815    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:09:39.990202    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:09:39.990331    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:09:40.012266    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:09:40.012378    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:09:40.028137    4630 logs.go:276] 2 containers: [b84f35bdcbf8 c2611cb1b266]
	I0803 18:09:40.028206    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:09:40.040609    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:09:40.040677    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:09:40.052601    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:09:40.052671    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:09:40.063570    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:09:40.063641    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:09:40.092927    4630 logs.go:276] 0 containers: []
	W0803 18:09:40.092940    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:09:40.092996    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:09:40.104218    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:09:40.104234    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:09:40.104239    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:09:40.128606    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:09:40.128614    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:09:40.140678    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:09:40.140692    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:09:40.156138    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:09:40.156151    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:09:40.168232    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:09:40.168244    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:09:40.203931    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:09:40.203942    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:09:40.222800    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:09:40.222810    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:09:40.242604    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:09:40.242617    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:09:40.255386    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:09:40.255396    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:09:40.271069    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:09:40.271080    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:09:40.283274    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:09:40.283288    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:09:40.319225    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:09:40.319232    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:09:40.323674    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:09:40.323680    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:09:42.844366    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:09:47.846679    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:09:47.847091    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:09:47.891477    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:09:47.891616    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:09:47.912529    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:09:47.912615    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:09:47.927417    4630 logs.go:276] 2 containers: [b84f35bdcbf8 c2611cb1b266]
	I0803 18:09:47.927494    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:09:47.941073    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:09:47.941147    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:09:47.952433    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:09:47.952504    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:09:47.963913    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:09:47.963982    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:09:47.975094    4630 logs.go:276] 0 containers: []
	W0803 18:09:47.975105    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:09:47.975157    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:09:47.986321    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:09:47.986336    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:09:47.986343    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:09:48.020419    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:09:48.020430    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:09:48.024549    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:09:48.024557    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:09:48.062538    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:09:48.062552    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:09:48.078478    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:09:48.078491    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:09:48.090883    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:09:48.090895    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:09:48.103852    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:09:48.103867    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:09:48.118972    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:09:48.118983    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:09:48.130802    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:09:48.130817    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:09:48.142463    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:09:48.142474    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:09:48.166090    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:09:48.166096    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:09:48.180578    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:09:48.180589    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:09:48.198885    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:09:48.198895    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:09:50.712576    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:09:55.714373    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:09:55.714657    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:09:55.748486    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:09:55.748624    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:09:55.772989    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:09:55.773082    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:09:55.790541    4630 logs.go:276] 2 containers: [b84f35bdcbf8 c2611cb1b266]
	I0803 18:09:55.790612    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:09:55.803339    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:09:55.803415    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:09:55.814600    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:09:55.814668    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:09:55.825463    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:09:55.825534    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:09:55.840633    4630 logs.go:276] 0 containers: []
	W0803 18:09:55.840645    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:09:55.840707    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:09:55.851107    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:09:55.851123    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:09:55.851129    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:09:55.863045    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:09:55.863055    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:09:55.898168    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:09:55.898179    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:09:55.933811    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:09:55.933824    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:09:55.948500    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:09:55.948509    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:09:55.960250    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:09:55.960263    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:09:55.975331    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:09:55.975344    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:09:55.999983    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:09:55.999995    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:09:56.004274    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:09:56.004280    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:09:56.017957    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:09:56.017967    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:09:56.037061    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:09:56.037070    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:09:56.048598    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:09:56.048608    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:09:56.071928    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:09:56.071941    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:09:58.583788    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:10:03.586511    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:10:03.586873    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:10:03.629607    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:10:03.629726    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:10:03.655425    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:10:03.655501    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:10:03.670521    4630 logs.go:276] 2 containers: [b84f35bdcbf8 c2611cb1b266]
	I0803 18:10:03.670589    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:10:03.681477    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:10:03.681549    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:10:03.692355    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:10:03.692427    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:10:03.703057    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:10:03.703118    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:10:03.713928    4630 logs.go:276] 0 containers: []
	W0803 18:10:03.713938    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:10:03.713997    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:10:03.724344    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:10:03.724358    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:10:03.724364    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:10:03.757792    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:10:03.757800    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:10:03.792611    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:10:03.792625    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:10:03.815586    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:10:03.815593    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:10:03.827756    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:10:03.827764    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:10:03.839256    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:10:03.839271    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:10:03.854091    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:10:03.854104    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:10:03.874032    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:10:03.874042    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:10:03.891767    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:10:03.891776    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:10:03.901038    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:10:03.901047    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:10:03.927827    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:10:03.927840    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:10:03.941628    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:10:03.941640    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:10:03.953030    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:10:03.953041    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:10:06.469889    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:10:11.472544    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:10:11.473060    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:10:11.511258    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:10:11.511400    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:10:11.533124    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:10:11.533223    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:10:11.548244    4630 logs.go:276] 2 containers: [b84f35bdcbf8 c2611cb1b266]
	I0803 18:10:11.548322    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:10:11.560432    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:10:11.560504    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:10:11.572643    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:10:11.572712    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:10:11.583202    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:10:11.583271    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:10:11.593905    4630 logs.go:276] 0 containers: []
	W0803 18:10:11.593915    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:10:11.593971    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:10:11.604526    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:10:11.604541    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:10:11.604545    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:10:11.622447    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:10:11.622460    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:10:11.634483    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:10:11.634496    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:10:11.646602    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:10:11.646612    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:10:11.651281    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:10:11.651289    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:10:11.665277    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:10:11.665287    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:10:11.683858    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:10:11.683868    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:10:11.695919    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:10:11.695932    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:10:11.711738    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:10:11.711750    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:10:11.734889    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:10:11.734896    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:10:11.769063    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:10:11.769070    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:10:11.803548    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:10:11.803558    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:10:11.818110    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:10:11.818120    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:10:14.331464    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:10:19.334160    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
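	(The two lines above are the loop that dominates the rest of this log: api_server.go probes the apiserver's /healthz endpoint, gives up after roughly five seconds with a client timeout, and then the log-gathering pass below runs again. A minimal Go sketch of such a probe, assuming only what the log shows — the URL, the ~5s timeout, and an in-VM certificate that cannot be verified; this is not minikube's actual implementation:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // Hedged sketch: probe the healthz URL from the log with a
	        // 5-second client timeout, matching the gap between the
	        // "Checking apiserver healthz" and "stopped" lines above.
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // Assumption: the apiserver inside the VM serves a
	                // certificate the host cannot verify, so skip verification.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get("https://10.0.2.15:8443/healthz")
	        if err != nil {
	            // On timeout this surfaces the same "Client.Timeout exceeded
	            // while awaiting headers" error seen throughout this log.
	            fmt.Println("stopped:", err)
	            return
	        }
	        defer resp.Body.Close()
	        fmt.Println("healthz:", resp.Status)
	    }

	Every probe in this section fails the same way, which is why the enumeration and gathering pass below repeats verbatim apart from timestamps.)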
	I0803 18:10:19.334626    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:10:19.381741    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:10:19.381868    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:10:19.401395    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:10:19.401483    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:10:19.416161    4630 logs.go:276] 2 containers: [b84f35bdcbf8 c2611cb1b266]
	I0803 18:10:19.416256    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:10:19.436544    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:10:19.436617    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:10:19.446948    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:10:19.447016    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:10:19.459299    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:10:19.459364    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:10:19.471675    4630 logs.go:276] 0 containers: []
	W0803 18:10:19.471686    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:10:19.471743    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:10:19.481784    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
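	(Each gathering pass begins, as above, by resolving one container ID per control-plane component with docker ps name filters — k8s_kube-apiserver, k8s_etcd, and so on — and logging how many matched. A sketch of that enumeration step, inferred from the Run: lines rather than taken from minikube's source:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs performs the same lookup the log shows:
	    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        // The component list mirrors the filters seen in this log.
	        for _, c := range []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	            "storage-provisioner",
	        } {
	            ids, err := containerIDs(c)
	            if err != nil {
	                fmt.Println(c, "error:", err)
	                continue
	            }
	            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	        }
	    }

	An empty result produces the warning seen above for "kindnet", which is simply not deployed in this configuration.)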
	I0803 18:10:19.481800    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:10:19.481806    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:10:19.486739    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:10:19.486749    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:10:19.521341    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:10:19.521353    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:10:19.539599    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:10:19.539610    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:10:19.553505    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:10:19.553518    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:10:19.565290    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:10:19.565301    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:10:19.576813    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:10:19.576827    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:10:19.609973    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:10:19.609980    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:10:19.645816    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:10:19.645826    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:10:19.679238    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:10:19.679250    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:10:19.706975    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:10:19.706990    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:10:19.733483    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:10:19.733495    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:10:19.761600    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:10:19.761613    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
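	(With the IDs resolved, every source is tailed to its last 400 lines: docker logs for each container, journalctl for the kubelet and Docker units, and dmesg for the kernel ring, as the Run: lines in this pass show. A minimal sketch of the per-container fetch under those same assumptions — the IDs below are the ones enumerated in the log, and the 400-line cap presumably keeps each pass bounded even when a component is crash-looping:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // tailLogs mirrors the log's `docker logs --tail 400 <id>` invocation,
	    // capturing stdout and stderr together as docker logs interleaves both.
	    func tailLogs(id string) (string, error) {
	        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        // kube-apiserver, etcd, kube-scheduler IDs taken from the log above.
	        for _, id := range []string{"77262126dc60", "983bbf4c86df", "2cffae83c168"} {
	            text, err := tailLogs(id)
	            if err != nil {
	                fmt.Println(id, "error:", err)
	                continue
	            }
	            fmt.Printf("=== %s ===\n%s", id, text)
	        }
	    }

	The remaining cycles in this section repeat the probe, enumeration, and gathering steps unchanged; only the timestamps and the order in which component logs are fetched vary.)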
	I0803 18:10:22.288837    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:10:27.290957    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:10:27.291394    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:10:27.341438    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:10:27.341541    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:10:27.361638    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:10:27.361730    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:10:27.375040    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:10:27.375113    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:10:27.386536    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:10:27.386599    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:10:27.397411    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:10:27.397470    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:10:27.412923    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:10:27.412989    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:10:27.425229    4630 logs.go:276] 0 containers: []
	W0803 18:10:27.425239    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:10:27.425291    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:10:27.435693    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:10:27.435712    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:10:27.435718    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:10:27.447546    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:10:27.447558    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:10:27.463259    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:10:27.463271    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:10:27.484798    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:10:27.484808    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:10:27.489027    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:10:27.489034    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:10:27.524438    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:10:27.524449    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:10:27.537554    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:10:27.537565    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:10:27.549070    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:10:27.549081    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:10:27.585352    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:10:27.585362    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:10:27.597942    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:10:27.597954    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:10:27.609698    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:10:27.609707    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:10:27.630738    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:10:27.630751    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:10:27.642704    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:10:27.642716    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:10:27.667495    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:10:27.667505    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:10:27.678827    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:10:27.678841    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:10:30.195203    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:10:35.197420    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:10:35.197836    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:10:35.238208    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:10:35.238337    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:10:35.260506    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:10:35.260595    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:10:35.280549    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:10:35.280626    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:10:35.292585    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:10:35.292657    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:10:35.304045    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:10:35.304116    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:10:35.314685    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:10:35.314751    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:10:35.324963    4630 logs.go:276] 0 containers: []
	W0803 18:10:35.324973    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:10:35.325029    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:10:35.335621    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:10:35.335635    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:10:35.335639    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:10:35.353658    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:10:35.353669    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:10:35.374656    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:10:35.374668    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:10:35.388186    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:10:35.388199    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:10:35.405902    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:10:35.405912    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:10:35.419643    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:10:35.419652    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:10:35.424211    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:10:35.424218    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:10:35.442142    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:10:35.442153    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:10:35.454016    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:10:35.454028    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:10:35.493594    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:10:35.493605    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:10:35.505569    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:10:35.505579    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:10:35.517255    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:10:35.517267    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:10:35.528468    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:10:35.528478    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:10:35.539846    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:10:35.539859    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:10:35.564638    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:10:35.564644    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:10:38.101803    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:10:43.103306    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:10:43.103757    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:10:43.144791    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:10:43.144913    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:10:43.167281    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:10:43.167399    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:10:43.184955    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:10:43.185030    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:10:43.197139    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:10:43.197210    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:10:43.211703    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:10:43.211771    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:10:43.222800    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:10:43.222865    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:10:43.233375    4630 logs.go:276] 0 containers: []
	W0803 18:10:43.233388    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:10:43.233445    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:10:43.248212    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:10:43.248229    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:10:43.248234    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:10:43.259939    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:10:43.259952    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:10:43.271574    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:10:43.271586    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:10:43.306996    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:10:43.307005    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:10:43.319610    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:10:43.319623    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:10:43.337816    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:10:43.337827    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:10:43.352973    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:10:43.352982    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:10:43.364472    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:10:43.364487    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:10:43.369076    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:10:43.369083    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:10:43.403019    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:10:43.403034    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:10:43.417447    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:10:43.417457    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:10:43.435524    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:10:43.435538    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:10:43.446808    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:10:43.446820    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:10:43.461821    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:10:43.461831    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:10:43.478260    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:10:43.478270    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:10:46.005500    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:10:51.005729    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:10:51.006257    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:10:51.046680    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:10:51.046798    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:10:51.067946    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:10:51.068091    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:10:51.086804    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:10:51.086888    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:10:51.100238    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:10:51.100304    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:10:51.110995    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:10:51.111062    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:10:51.122698    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:10:51.122766    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:10:51.133143    4630 logs.go:276] 0 containers: []
	W0803 18:10:51.133155    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:10:51.133223    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:10:51.144553    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:10:51.144572    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:10:51.144577    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:10:51.163206    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:10:51.163215    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:10:51.175240    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:10:51.175253    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:10:51.187094    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:10:51.187107    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:10:51.201273    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:10:51.201283    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:10:51.212676    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:10:51.212684    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:10:51.224216    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:10:51.224227    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:10:51.235843    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:10:51.235853    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:10:51.240367    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:10:51.240374    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:10:51.274585    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:10:51.274597    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:10:51.286819    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:10:51.286828    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:10:51.311661    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:10:51.311672    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:10:51.323144    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:10:51.323156    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:10:51.357002    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:10:51.357019    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:10:51.376343    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:10:51.376355    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:10:53.896366    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:10:58.898389    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:10:58.898467    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:10:58.909952    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:10:58.910025    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:10:58.920771    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:10:58.920838    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:10:58.931426    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:10:58.931495    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:10:58.942146    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:10:58.942205    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:10:58.953352    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:10:58.953395    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:10:58.964628    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:10:58.964693    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:10:58.975090    4630 logs.go:276] 0 containers: []
	W0803 18:10:58.975104    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:10:58.975152    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:10:58.986124    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:10:58.986143    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:10:58.986149    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:10:59.022621    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:10:59.022632    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:10:59.034632    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:10:59.034644    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:10:59.051880    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:10:59.051893    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:10:59.087282    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:10:59.087290    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:10:59.106730    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:10:59.106743    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:10:59.118526    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:10:59.118535    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:10:59.129738    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:10:59.129748    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:10:59.153608    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:10:59.153617    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:10:59.165294    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:10:59.165306    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:10:59.176817    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:10:59.176831    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:10:59.181025    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:10:59.181033    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:10:59.195039    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:10:59.195051    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:10:59.206545    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:10:59.206559    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:10:59.221792    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:10:59.221804    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:11:01.735367    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:11:06.737958    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:11:06.738173    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:11:06.765212    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:11:06.765330    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:11:06.784230    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:11:06.784315    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:11:06.798024    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:11:06.798105    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:11:06.813841    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:11:06.813908    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:11:06.824575    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:11:06.824640    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:11:06.835092    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:11:06.835155    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:11:06.845907    4630 logs.go:276] 0 containers: []
	W0803 18:11:06.845918    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:11:06.845974    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:11:06.856167    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:11:06.856183    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:11:06.856188    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:11:06.868109    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:11:06.868121    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:11:06.879352    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:11:06.879364    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:11:06.890866    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:11:06.890877    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:11:06.902648    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:11:06.902660    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:11:06.938826    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:11:06.938837    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:11:06.950386    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:11:06.950398    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:11:06.954915    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:11:06.954924    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:11:06.969965    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:11:06.969978    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:11:06.995350    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:11:06.995360    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:11:07.016837    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:11:07.016850    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:11:07.049308    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:11:07.049318    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:11:07.061539    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:11:07.061552    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:11:07.073236    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:11:07.073249    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:11:07.107450    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:11:07.107462    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:11:09.622386    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:11:14.622761    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:11:14.623213    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:11:14.661905    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:11:14.662034    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:11:14.686156    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:11:14.686249    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:11:14.701515    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:11:14.701592    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:11:14.714266    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:11:14.714331    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:11:14.725156    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:11:14.725220    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:11:14.735994    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:11:14.736057    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:11:14.746263    4630 logs.go:276] 0 containers: []
	W0803 18:11:14.746274    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:11:14.746331    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:11:14.756926    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:11:14.756943    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:11:14.756950    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:11:14.769243    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:11:14.769254    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:11:14.790322    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:11:14.790335    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:11:14.812283    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:11:14.812293    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:11:14.826700    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:11:14.826710    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:11:14.838930    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:11:14.838940    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:11:14.851205    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:11:14.851216    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:11:14.876291    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:11:14.876298    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:11:14.888496    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:11:14.888509    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:11:14.903620    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:11:14.903632    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:11:14.915656    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:11:14.915669    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:11:14.927678    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:11:14.927688    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:11:14.939114    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:11:14.939125    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:11:14.972890    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:11:14.972897    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:11:14.976997    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:11:14.977006    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:11:17.510831    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:11:22.511331    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:11:22.511396    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:11:22.523837    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:11:22.523882    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:11:22.535529    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:11:22.535600    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:11:22.547373    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:11:22.547432    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:11:22.563150    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:11:22.563204    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:11:22.574167    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:11:22.574226    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:11:22.585287    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:11:22.585349    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:11:22.595909    4630 logs.go:276] 0 containers: []
	W0803 18:11:22.595920    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:11:22.595959    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:11:22.608941    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:11:22.608961    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:11:22.608967    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:11:22.614250    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:11:22.614261    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:11:22.626420    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:11:22.626432    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:11:22.641334    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:11:22.641349    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:11:22.654892    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:11:22.654902    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:11:22.673518    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:11:22.673535    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:11:22.697567    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:11:22.697579    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:11:22.733907    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:11:22.733928    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:11:22.805787    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:11:22.805799    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:11:22.830713    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:11:22.830722    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:11:22.842539    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:11:22.842547    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:11:22.856911    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:11:22.856920    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:11:22.871670    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:11:22.871683    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:11:22.884523    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:11:22.884531    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:11:22.896163    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:11:22.896170    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:11:25.413070    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:11:30.415368    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:11:30.415524    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:11:30.431806    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:11:30.431897    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:11:30.445916    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:11:30.445984    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:11:30.457921    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:11:30.457994    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:11:30.470193    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:11:30.470264    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:11:30.483650    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:11:30.483722    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:11:30.496043    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:11:30.496117    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:11:30.507793    4630 logs.go:276] 0 containers: []
	W0803 18:11:30.507806    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:11:30.507865    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:11:30.519609    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:11:30.519632    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:11:30.519639    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:11:30.532850    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:11:30.532863    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:11:30.546102    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:11:30.546114    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:11:30.559361    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:11:30.559374    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:11:30.575803    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:11:30.575813    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:11:30.595760    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:11:30.595770    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:11:30.608206    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:11:30.608216    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:11:30.619950    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:11:30.619961    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:11:30.644972    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:11:30.644985    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:11:30.679209    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:11:30.679217    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:11:30.715669    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:11:30.715683    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:11:30.727290    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:11:30.727300    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:11:30.742290    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:11:30.742300    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:11:30.746633    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:11:30.746640    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:11:30.763484    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:11:30.763498    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:11:33.279725    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:11:38.281880    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:11:38.282332    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:11:38.318143    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:11:38.318267    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:11:38.338087    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:11:38.338163    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:11:38.352720    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:11:38.352798    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:11:38.365188    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:11:38.365260    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:11:38.375748    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:11:38.375812    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:11:38.389391    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:11:38.389465    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:11:38.399718    4630 logs.go:276] 0 containers: []
	W0803 18:11:38.399731    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:11:38.399782    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:11:38.414148    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:11:38.414167    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:11:38.414172    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:11:38.432243    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:11:38.432255    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:11:38.436358    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:11:38.436366    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:11:38.448087    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:11:38.448096    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:11:38.459479    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:11:38.459491    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:11:38.474880    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:11:38.474894    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:11:38.490028    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:11:38.490037    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:11:38.507693    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:11:38.507703    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:11:38.531967    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:11:38.531975    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:11:38.543340    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:11:38.543351    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:11:38.577765    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:11:38.577779    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:11:38.602074    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:11:38.602083    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:11:38.617855    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:11:38.617866    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:11:38.631533    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:11:38.631544    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:11:38.666692    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:11:38.666704    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:11:41.182104    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:11:46.184650    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:11:46.184706    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:11:46.195850    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:11:46.195922    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:11:46.207714    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:11:46.207776    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:11:46.219979    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:11:46.220030    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:11:46.230959    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:11:46.231020    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:11:46.243345    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:11:46.243399    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:11:46.255416    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:11:46.255482    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:11:46.267277    4630 logs.go:276] 0 containers: []
	W0803 18:11:46.267286    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:11:46.267323    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:11:46.277433    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:11:46.277448    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:11:46.277453    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:11:46.312919    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:11:46.312933    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:11:46.325436    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:11:46.325447    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:11:46.344547    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:11:46.344567    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:11:46.357561    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:11:46.357572    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:11:46.370893    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:11:46.370901    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:11:46.385600    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:11:46.385614    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:11:46.399899    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:11:46.399912    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:11:46.416263    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:11:46.416272    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:11:46.440969    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:11:46.440984    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:11:46.455600    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:11:46.455611    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:11:46.460131    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:11:46.460138    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:11:46.498775    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:11:46.498784    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:11:46.516760    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:11:46.516772    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:11:46.530092    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:11:46.530103    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:11:49.047549    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:11:54.050306    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:11:54.050717    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:11:54.092272    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:11:54.092419    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:11:54.114203    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:11:54.114306    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:11:54.129812    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:11:54.129884    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:11:54.142373    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:11:54.142430    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:11:54.153074    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:11:54.153146    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:11:54.163853    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:11:54.163916    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:11:54.175307    4630 logs.go:276] 0 containers: []
	W0803 18:11:54.175320    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:11:54.175377    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:11:54.189003    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:11:54.189028    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:11:54.189033    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:11:54.223322    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:11:54.223330    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:11:54.240205    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:11:54.240218    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:11:54.251903    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:11:54.251914    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:11:54.263320    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:11:54.263333    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:11:54.280111    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:11:54.280124    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:11:54.299487    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:11:54.299498    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:11:54.310902    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:11:54.310912    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:11:54.314987    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:11:54.314993    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:11:54.329131    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:11:54.329142    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:11:54.341303    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:11:54.341317    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:11:54.353300    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:11:54.353313    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:11:54.376564    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:11:54.376573    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:11:54.387627    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:11:54.387641    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:11:54.421206    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:11:54.421215    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:11:56.938346    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:12:01.941075    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:12:01.941447    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 18:12:01.972492    4630 logs.go:276] 1 containers: [77262126dc60]
	I0803 18:12:01.972613    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 18:12:01.991133    4630 logs.go:276] 1 containers: [983bbf4c86df]
	I0803 18:12:01.991217    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 18:12:02.005057    4630 logs.go:276] 4 containers: [b424f14f4cfc 8722dbe9ce68 b84f35bdcbf8 c2611cb1b266]
	I0803 18:12:02.005124    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 18:12:02.016776    4630 logs.go:276] 1 containers: [2cffae83c168]
	I0803 18:12:02.016840    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 18:12:02.028190    4630 logs.go:276] 1 containers: [342bc0852b67]
	I0803 18:12:02.028256    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 18:12:02.038925    4630 logs.go:276] 1 containers: [80d0b706be3f]
	I0803 18:12:02.038989    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 18:12:02.049207    4630 logs.go:276] 0 containers: []
	W0803 18:12:02.049224    4630 logs.go:278] No container was found matching "kindnet"
	I0803 18:12:02.049272    4630 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 18:12:02.060386    4630 logs.go:276] 1 containers: [f6fd4b2b4472]
	I0803 18:12:02.060402    4630 logs.go:123] Gathering logs for dmesg ...
	I0803 18:12:02.060407    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 18:12:02.065267    4630 logs.go:123] Gathering logs for kube-apiserver [77262126dc60] ...
	I0803 18:12:02.065276    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77262126dc60"
	I0803 18:12:02.079317    4630 logs.go:123] Gathering logs for coredns [b424f14f4cfc] ...
	I0803 18:12:02.079329    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424f14f4cfc"
	I0803 18:12:02.091399    4630 logs.go:123] Gathering logs for kube-scheduler [2cffae83c168] ...
	I0803 18:12:02.091412    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cffae83c168"
	I0803 18:12:02.106378    4630 logs.go:123] Gathering logs for kube-controller-manager [80d0b706be3f] ...
	I0803 18:12:02.106388    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d0b706be3f"
	I0803 18:12:02.123551    4630 logs.go:123] Gathering logs for container status ...
	I0803 18:12:02.123561    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 18:12:02.135867    4630 logs.go:123] Gathering logs for coredns [b84f35bdcbf8] ...
	I0803 18:12:02.135879    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b84f35bdcbf8"
	I0803 18:12:02.147749    4630 logs.go:123] Gathering logs for Docker ...
	I0803 18:12:02.147762    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 18:12:02.170166    4630 logs.go:123] Gathering logs for kubelet ...
	I0803 18:12:02.170175    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 18:12:02.203694    4630 logs.go:123] Gathering logs for coredns [8722dbe9ce68] ...
	I0803 18:12:02.203703    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8722dbe9ce68"
	I0803 18:12:02.214889    4630 logs.go:123] Gathering logs for coredns [c2611cb1b266] ...
	I0803 18:12:02.214902    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2611cb1b266"
	I0803 18:12:02.227991    4630 logs.go:123] Gathering logs for kube-proxy [342bc0852b67] ...
	I0803 18:12:02.228001    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 342bc0852b67"
	I0803 18:12:02.246397    4630 logs.go:123] Gathering logs for storage-provisioner [f6fd4b2b4472] ...
	I0803 18:12:02.246408    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fd4b2b4472"
	I0803 18:12:02.258410    4630 logs.go:123] Gathering logs for describe nodes ...
	I0803 18:12:02.258421    4630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 18:12:02.292461    4630 logs.go:123] Gathering logs for etcd [983bbf4c86df] ...
	I0803 18:12:02.292472    4630 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 983bbf4c86df"
	I0803 18:12:04.809800    4630 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 18:12:09.812354    4630 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 18:12:09.815698    4630 out.go:177] 
	W0803 18:12:09.819650    4630 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0803 18:12:09.819658    4630 out.go:239] * 
	* 
	W0803 18:12:09.820057    4630 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:12:09.839443    4630 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-413000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.80s)
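
Note: in the failure above, the upgraded cluster's apiserver never reports healthy, so minikube keeps re-running the same log-gathering loop (docker ps per component, then docker logs / journalctl / kubectl describe nodes) until the 6m0s node wait expires. The same probes can be replayed by hand while the guest is still up; a minimal sketch, assuming the stopped-upgrade-413000 VM is still reachable over SSH, using the kube-apiserver container ID reported in the log above:

	# replay minikube's own gathering commands inside the guest
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-413000 -- sudo journalctl -u kubelet -n 400
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-413000 -- docker logs --tail 400 77262126dc60
	# or collect everything at once, as the error box asks for when filing an issue
	out/minikube-darwin-arm64 logs -p stopped-upgrade-413000 --file=logs.txt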

TestPause/serial/Start (10.01s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-942000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-942000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.942923083s)

-- stdout --
	* [pause-942000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-942000" primary control-plane node in "pause-942000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-942000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-942000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-942000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-942000 -n pause-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-942000 -n pause-942000: exit status 7 (67.95725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-942000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.01s)
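
Note: unlike the upgrade failure above, this test never gets a VM at all: each qemu2 start dies immediately because nothing is accepting connections on /var/run/socket_vmnet, and the same "Connection refused" kills the NoKubernetes and network-plugin groups below. A host-side sanity check before rerunning might look like this (a sketch; it assumes socket_vmnet was installed as a root launchd service, which is one common setup, and the launchd label varies by install):

	# does the socket exist, and is a daemon holding it open?
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet
	# is the launchd job loaded at all?
	sudo launchctl list | grep -i socket_vmnet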

TestNoKubernetes/serial/StartWithK8s (9.93s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 
E0803 18:09:26.625044    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 : exit status 80 (9.886437084s)

-- stdout --
	* [NoKubernetes-562000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-562000" primary control-plane node in "NoKubernetes-562000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-562000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-562000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000: exit status 7 (47.190916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-562000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.93s)

TestNoKubernetes/serial/StartWithStopK8s (5.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239114291s)

-- stdout --
	* [NoKubernetes-562000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-562000
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-562000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000: exit status 7 (48.344ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-562000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 : exit status 80 (5.25639975s)

-- stdout --
	* [NoKubernetes-562000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-562000
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-562000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000: exit status 7 (65.429ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-562000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 : exit status 80 (5.238842333s)

-- stdout --
	* [NoKubernetes-562000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-562000
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-562000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-562000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-562000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-562000 -n NoKubernetes-562000: exit status 7 (59.138375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-562000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)
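
Note: the four NoKubernetes serial subtests share the NoKubernetes-562000 profile, which is why the later ones switch from "Creating qemu2 VM" to "Restarting existing qemu2 VM": they are retrying the same half-created machine against the same dead socket. Once socket_vmnet is back, clearing the stale profile first (as the error text itself suggests) avoids the restart-path failures; a sketch:

	out/minikube-darwin-arm64 delete -p NoKubernetes-562000
	out/minikube-darwin-arm64 status -p NoKubernetes-562000   # should now report the profile as absent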

TestNetworkPlugins/group/auto/Start (9.96s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.957167167s)

-- stdout --
	* [auto-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-289000" primary control-plane node in "auto-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:10:22.160291    4859 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:10:22.160450    4859 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:10:22.160454    4859 out.go:304] Setting ErrFile to fd 2...
	I0803 18:10:22.160456    4859 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:10:22.160577    4859 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:10:22.161694    4859 out.go:298] Setting JSON to false
	I0803 18:10:22.178163    4859 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4186,"bootTime":1722729636,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:10:22.178224    4859 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:10:22.183895    4859 out.go:177] * [auto-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:10:22.192138    4859 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:10:22.192245    4859 notify.go:220] Checking for updates...
	I0803 18:10:22.198044    4859 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:10:22.205037    4859 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:10:22.208056    4859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:10:22.211062    4859 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:10:22.214064    4859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:10:22.217319    4859 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:10:22.217383    4859 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:10:22.217444    4859 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:10:22.221032    4859 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:10:22.227999    4859 start.go:297] selected driver: qemu2
	I0803 18:10:22.228004    4859 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:10:22.228009    4859 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:10:22.230211    4859 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:10:22.232996    4859 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:10:22.234106    4859 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:10:22.234128    4859 cni.go:84] Creating CNI manager for ""
	I0803 18:10:22.234134    4859 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:10:22.234140    4859 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 18:10:22.234170    4859 start.go:340] cluster config:
	{Name:auto-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:10:22.237785    4859 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:10:22.241067    4859 out.go:177] * Starting "auto-289000" primary control-plane node in "auto-289000" cluster
	I0803 18:10:22.248999    4859 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:10:22.249012    4859 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:10:22.249021    4859 cache.go:56] Caching tarball of preloaded images
	I0803 18:10:22.249072    4859 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:10:22.249077    4859 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:10:22.249132    4859 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/auto-289000/config.json ...
	I0803 18:10:22.249142    4859 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/auto-289000/config.json: {Name:mk824de7b373d70a2e373eb5ed6073f0a95ed7aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:10:22.249437    4859 start.go:360] acquireMachinesLock for auto-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:10:22.249467    4859 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "auto-289000"
	I0803 18:10:22.249476    4859 start.go:93] Provisioning new machine with config: &{Name:auto-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:10:22.249500    4859 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:10:22.257039    4859 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:10:22.272170    4859 start.go:159] libmachine.API.Create for "auto-289000" (driver="qemu2")
	I0803 18:10:22.272200    4859 client.go:168] LocalClient.Create starting
	I0803 18:10:22.272278    4859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:10:22.272315    4859 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:22.272329    4859 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:22.272370    4859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:10:22.272393    4859 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:22.272405    4859 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:22.272830    4859 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:10:22.424941    4859 main.go:141] libmachine: Creating SSH key...
	I0803 18:10:22.590398    4859 main.go:141] libmachine: Creating Disk image...
	I0803 18:10:22.590405    4859 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:10:22.590627    4859 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2
	I0803 18:10:22.600451    4859 main.go:141] libmachine: STDOUT: 
	I0803 18:10:22.600470    4859 main.go:141] libmachine: STDERR: 
	I0803 18:10:22.600529    4859 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2 +20000M
	I0803 18:10:22.608514    4859 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:10:22.608529    4859 main.go:141] libmachine: STDERR: 
	I0803 18:10:22.608546    4859 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2
	I0803 18:10:22.608549    4859 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:10:22.608560    4859 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:10:22.608585    4859 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:13:f8:4e:14:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2
	I0803 18:10:22.610248    4859 main.go:141] libmachine: STDOUT: 
	I0803 18:10:22.610261    4859 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:10:22.610280    4859 client.go:171] duration metric: took 338.086542ms to LocalClient.Create
	I0803 18:10:24.612534    4859 start.go:128] duration metric: took 2.3630675s to createHost
	I0803 18:10:24.612612    4859 start.go:83] releasing machines lock for "auto-289000", held for 2.363202166s
	W0803 18:10:24.612747    4859 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:10:24.627122    4859 out.go:177] * Deleting "auto-289000" in qemu2 ...
	W0803 18:10:24.655849    4859 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:10:24.655881    4859 start.go:729] Will try again in 5 seconds ...
	I0803 18:10:29.658056    4859 start.go:360] acquireMachinesLock for auto-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:10:29.658698    4859 start.go:364] duration metric: took 497µs to acquireMachinesLock for "auto-289000"
	I0803 18:10:29.658780    4859 start.go:93] Provisioning new machine with config: &{Name:auto-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:10:29.659112    4859 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:10:29.669688    4859 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:10:29.717032    4859 start.go:159] libmachine.API.Create for "auto-289000" (driver="qemu2")
	I0803 18:10:29.717087    4859 client.go:168] LocalClient.Create starting
	I0803 18:10:29.717188    4859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:10:29.717247    4859 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:29.717263    4859 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:29.717321    4859 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:10:29.717360    4859 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:29.717371    4859 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:29.717957    4859 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:10:29.878616    4859 main.go:141] libmachine: Creating SSH key...
	I0803 18:10:30.020025    4859 main.go:141] libmachine: Creating Disk image...
	I0803 18:10:30.020040    4859 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:10:30.020233    4859 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2
	I0803 18:10:30.029882    4859 main.go:141] libmachine: STDOUT: 
	I0803 18:10:30.029899    4859 main.go:141] libmachine: STDERR: 
	I0803 18:10:30.029939    4859 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2 +20000M
	I0803 18:10:30.037980    4859 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:10:30.037996    4859 main.go:141] libmachine: STDERR: 
	I0803 18:10:30.038012    4859 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2
	I0803 18:10:30.038019    4859 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:10:30.038031    4859 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:10:30.038070    4859 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:11:75:a1:e4:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/auto-289000/disk.qcow2
	I0803 18:10:30.039881    4859 main.go:141] libmachine: STDOUT: 
	I0803 18:10:30.039897    4859 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:10:30.039910    4859 client.go:171] duration metric: took 322.82575ms to LocalClient.Create
	I0803 18:10:32.042070    4859 start.go:128] duration metric: took 2.382974834s to createHost
	I0803 18:10:32.042143    4859 start.go:83] releasing machines lock for "auto-289000", held for 2.383489125s
	W0803 18:10:32.042515    4859 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:10:32.056135    4859 out.go:177] 
	W0803 18:10:32.060244    4859 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:10:32.060390    4859 out.go:239] * 
	* 
	W0803 18:10:32.062839    4859 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:10:32.075141    4859 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.96s)
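
Note: this failure, and every qemu2 start below, has the same proximate cause: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so the QEMU process is never attached to the network and the VM create is aborted. A minimal triage sketch for the affected agent, using only paths that appear in the log (these commands are illustrative and were not part of the test run):

	# Does the unix socket exist, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Probe the socket the same way minikube does; a "Connection refused"
	# here reproduces the failure without involving QEMU at all.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true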

TestNetworkPlugins/group/calico/Start (9.79s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.784608792s)

-- stdout --
	* [calico-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-289000" primary control-plane node in "calico-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:10:34.236524    4969 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:10:34.236683    4969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:10:34.236688    4969 out.go:304] Setting ErrFile to fd 2...
	I0803 18:10:34.236691    4969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:10:34.236819    4969 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:10:34.237907    4969 out.go:298] Setting JSON to false
	I0803 18:10:34.254541    4969 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4198,"bootTime":1722729636,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:10:34.254617    4969 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:10:34.259346    4969 out.go:177] * [calico-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:10:34.268145    4969 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:10:34.268213    4969 notify.go:220] Checking for updates...
	I0803 18:10:34.275070    4969 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:10:34.278171    4969 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:10:34.281115    4969 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:10:34.284147    4969 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:10:34.287147    4969 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:10:34.290361    4969 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:10:34.290426    4969 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:10:34.290465    4969 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:10:34.294086    4969 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:10:34.300044    4969 start.go:297] selected driver: qemu2
	I0803 18:10:34.300049    4969 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:10:34.300054    4969 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:10:34.302225    4969 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:10:34.305111    4969 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:10:34.308175    4969 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:10:34.308200    4969 cni.go:84] Creating CNI manager for "calico"
	I0803 18:10:34.308204    4969 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0803 18:10:34.308233    4969 start.go:340] cluster config:
	{Name:calico-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:10:34.311590    4969 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:10:34.319125    4969 out.go:177] * Starting "calico-289000" primary control-plane node in "calico-289000" cluster
	I0803 18:10:34.323110    4969 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:10:34.323124    4969 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:10:34.323131    4969 cache.go:56] Caching tarball of preloaded images
	I0803 18:10:34.323182    4969 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:10:34.323187    4969 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:10:34.323234    4969 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/calico-289000/config.json ...
	I0803 18:10:34.323246    4969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/calico-289000/config.json: {Name:mkf66803b2e3f2a64232f2afe9b36d96a9040d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:10:34.323561    4969 start.go:360] acquireMachinesLock for calico-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:10:34.323592    4969 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "calico-289000"
	I0803 18:10:34.323601    4969 start.go:93] Provisioning new machine with config: &{Name:calico-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:10:34.323651    4969 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:10:34.331120    4969 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:10:34.347940    4969 start.go:159] libmachine.API.Create for "calico-289000" (driver="qemu2")
	I0803 18:10:34.347967    4969 client.go:168] LocalClient.Create starting
	I0803 18:10:34.348037    4969 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:10:34.348070    4969 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:34.348079    4969 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:34.348127    4969 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:10:34.348151    4969 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:34.348158    4969 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:34.348551    4969 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:10:34.502427    4969 main.go:141] libmachine: Creating SSH key...
	I0803 18:10:34.592917    4969 main.go:141] libmachine: Creating Disk image...
	I0803 18:10:34.592924    4969 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:10:34.593098    4969 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2
	I0803 18:10:34.602407    4969 main.go:141] libmachine: STDOUT: 
	I0803 18:10:34.602447    4969 main.go:141] libmachine: STDERR: 
	I0803 18:10:34.602500    4969 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2 +20000M
	I0803 18:10:34.610438    4969 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:10:34.610453    4969 main.go:141] libmachine: STDERR: 
	I0803 18:10:34.610467    4969 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2
	I0803 18:10:34.610478    4969 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:10:34.610490    4969 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:10:34.610514    4969 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:60:03:35:e7:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2
	I0803 18:10:34.612142    4969 main.go:141] libmachine: STDOUT: 
	I0803 18:10:34.612159    4969 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:10:34.612176    4969 client.go:171] duration metric: took 264.212416ms to LocalClient.Create
	I0803 18:10:36.614449    4969 start.go:128] duration metric: took 2.290801292s to createHost
	I0803 18:10:36.614587    4969 start.go:83] releasing machines lock for "calico-289000", held for 2.291050334s
	W0803 18:10:36.614668    4969 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:10:36.636161    4969 out.go:177] * Deleting "calico-289000" in qemu2 ...
	W0803 18:10:36.665081    4969 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:10:36.665204    4969 start.go:729] Will try again in 5 seconds ...
	I0803 18:10:41.667310    4969 start.go:360] acquireMachinesLock for calico-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:10:41.667870    4969 start.go:364] duration metric: took 433.041µs to acquireMachinesLock for "calico-289000"
	I0803 18:10:41.668010    4969 start.go:93] Provisioning new machine with config: &{Name:calico-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:10:41.668292    4969 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:10:41.677840    4969 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:10:41.721034    4969 start.go:159] libmachine.API.Create for "calico-289000" (driver="qemu2")
	I0803 18:10:41.721085    4969 client.go:168] LocalClient.Create starting
	I0803 18:10:41.721208    4969 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:10:41.721269    4969 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:41.721285    4969 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:41.721335    4969 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:10:41.721375    4969 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:41.721384    4969 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:41.721904    4969 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:10:41.880798    4969 main.go:141] libmachine: Creating SSH key...
	I0803 18:10:41.930309    4969 main.go:141] libmachine: Creating Disk image...
	I0803 18:10:41.930314    4969 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:10:41.930493    4969 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2
	I0803 18:10:41.940035    4969 main.go:141] libmachine: STDOUT: 
	I0803 18:10:41.940053    4969 main.go:141] libmachine: STDERR: 
	I0803 18:10:41.940101    4969 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2 +20000M
	I0803 18:10:41.948023    4969 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:10:41.948041    4969 main.go:141] libmachine: STDERR: 
	I0803 18:10:41.948052    4969 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2
	I0803 18:10:41.948056    4969 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:10:41.948068    4969 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:10:41.948091    4969 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:23:ed:55:1a:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/calico-289000/disk.qcow2
	I0803 18:10:41.949754    4969 main.go:141] libmachine: STDOUT: 
	I0803 18:10:41.949769    4969 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:10:41.949783    4969 client.go:171] duration metric: took 228.699458ms to LocalClient.Create
	I0803 18:10:43.951952    4969 start.go:128] duration metric: took 2.28367975s to createHost
	I0803 18:10:43.952058    4969 start.go:83] releasing machines lock for "calico-289000", held for 2.284228083s
	W0803 18:10:43.952349    4969 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:10:43.962099    4969 out.go:177] 
	W0803 18:10:43.969137    4969 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:10:43.969197    4969 out.go:239] * 
	* 
	W0803 18:10:43.972043    4969 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:10:43.979051    4969 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.79s)
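
Note: the built-in retry ("Will try again in 5 seconds") cannot succeed here, since the missing piece is the host-side daemon rather than anything inside the VM. Restarting socket_vmnet on the agent is the usual remedy; a sketch along the lines of the socket_vmnet README, assuming the source install under /opt/socket_vmnet seen in the log (the gateway address is illustrative):

	# Run the daemon in the foreground for debugging:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
	# Or, if the daemon is managed as a Homebrew service:
	sudo brew services restart socket_vmnet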

TestNetworkPlugins/group/custom-flannel/Start (9.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.802735667s)

-- stdout --
	* [custom-flannel-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-289000" primary control-plane node in "custom-flannel-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:10:46.332940    5090 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:10:46.333094    5090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:10:46.333097    5090 out.go:304] Setting ErrFile to fd 2...
	I0803 18:10:46.333099    5090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:10:46.333243    5090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:10:46.334341    5090 out.go:298] Setting JSON to false
	I0803 18:10:46.350759    5090 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4210,"bootTime":1722729636,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:10:46.350830    5090 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:10:46.355088    5090 out.go:177] * [custom-flannel-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:10:46.363607    5090 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:10:46.363670    5090 notify.go:220] Checking for updates...
	I0803 18:10:46.371472    5090 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:10:46.374533    5090 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:10:46.377417    5090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:10:46.380509    5090 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:10:46.383515    5090 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:10:46.385125    5090 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:10:46.385186    5090 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:10:46.385230    5090 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:10:46.389479    5090 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:10:46.396376    5090 start.go:297] selected driver: qemu2
	I0803 18:10:46.396382    5090 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:10:46.396388    5090 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:10:46.398667    5090 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:10:46.401515    5090 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:10:46.404573    5090 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:10:46.404588    5090 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0803 18:10:46.404599    5090 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0803 18:10:46.404628    5090 start.go:340] cluster config:
	{Name:custom-flannel-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:10:46.408028    5090 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:10:46.415472    5090 out.go:177] * Starting "custom-flannel-289000" primary control-plane node in "custom-flannel-289000" cluster
	I0803 18:10:46.419511    5090 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:10:46.419539    5090 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:10:46.419548    5090 cache.go:56] Caching tarball of preloaded images
	I0803 18:10:46.419625    5090 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:10:46.419633    5090 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:10:46.419687    5090 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/custom-flannel-289000/config.json ...
	I0803 18:10:46.419698    5090 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/custom-flannel-289000/config.json: {Name:mka3bb214f9ab1efd8f2d717711217b468094c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:10:46.420036    5090 start.go:360] acquireMachinesLock for custom-flannel-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:10:46.420070    5090 start.go:364] duration metric: took 25.958µs to acquireMachinesLock for "custom-flannel-289000"
	I0803 18:10:46.420082    5090 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:10:46.420116    5090 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:10:46.427520    5090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:10:46.442735    5090 start.go:159] libmachine.API.Create for "custom-flannel-289000" (driver="qemu2")
	I0803 18:10:46.442757    5090 client.go:168] LocalClient.Create starting
	I0803 18:10:46.442823    5090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:10:46.442856    5090 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:46.442865    5090 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:46.442901    5090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:10:46.442924    5090 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:46.442931    5090 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:46.443375    5090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:10:46.593422    5090 main.go:141] libmachine: Creating SSH key...
	I0803 18:10:46.653642    5090 main.go:141] libmachine: Creating Disk image...
	I0803 18:10:46.653647    5090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:10:46.653839    5090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2
	I0803 18:10:46.663042    5090 main.go:141] libmachine: STDOUT: 
	I0803 18:10:46.663062    5090 main.go:141] libmachine: STDERR: 
	I0803 18:10:46.663121    5090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2 +20000M
	I0803 18:10:46.671696    5090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:10:46.671714    5090 main.go:141] libmachine: STDERR: 
	I0803 18:10:46.671727    5090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2
	I0803 18:10:46.671736    5090 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:10:46.671751    5090 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:10:46.671780    5090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:80:4f:74:1b:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2
	I0803 18:10:46.673575    5090 main.go:141] libmachine: STDOUT: 
	I0803 18:10:46.673591    5090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:10:46.673610    5090 client.go:171] duration metric: took 230.855042ms to LocalClient.Create
	I0803 18:10:48.675663    5090 start.go:128] duration metric: took 2.255597625s to createHost
	I0803 18:10:48.675707    5090 start.go:83] releasing machines lock for "custom-flannel-289000", held for 2.255695s
	W0803 18:10:48.675766    5090 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:10:48.680791    5090 out.go:177] * Deleting "custom-flannel-289000" in qemu2 ...
	W0803 18:10:48.698309    5090 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:10:48.698320    5090 start.go:729] Will try again in 5 seconds ...
	I0803 18:10:53.700177    5090 start.go:360] acquireMachinesLock for custom-flannel-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:10:53.700321    5090 start.go:364] duration metric: took 114.625µs to acquireMachinesLock for "custom-flannel-289000"
	I0803 18:10:53.700356    5090 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:10:53.700405    5090 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:10:53.709580    5090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:10:53.725342    5090 start.go:159] libmachine.API.Create for "custom-flannel-289000" (driver="qemu2")
	I0803 18:10:53.725367    5090 client.go:168] LocalClient.Create starting
	I0803 18:10:53.725439    5090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:10:53.725478    5090 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:53.725488    5090 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:53.725518    5090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:10:53.725544    5090 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:53.725557    5090 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:53.725830    5090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:10:53.878294    5090 main.go:141] libmachine: Creating SSH key...
	I0803 18:10:54.044226    5090 main.go:141] libmachine: Creating Disk image...
	I0803 18:10:54.044233    5090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:10:54.044455    5090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2
	I0803 18:10:54.054020    5090 main.go:141] libmachine: STDOUT: 
	I0803 18:10:54.054040    5090 main.go:141] libmachine: STDERR: 
	I0803 18:10:54.054088    5090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2 +20000M
	I0803 18:10:54.062047    5090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:10:54.062064    5090 main.go:141] libmachine: STDERR: 
	I0803 18:10:54.062076    5090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2
	I0803 18:10:54.062081    5090 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:10:54.062096    5090 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:10:54.062123    5090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:86:23:3f:1a:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2
	I0803 18:10:54.063821    5090 main.go:141] libmachine: STDOUT: 
	I0803 18:10:54.063836    5090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:10:54.063850    5090 client.go:171] duration metric: took 338.490125ms to LocalClient.Create
	I0803 18:10:56.066013    5090 start.go:128] duration metric: took 2.365643417s to createHost
	I0803 18:10:56.066094    5090 start.go:83] releasing machines lock for "custom-flannel-289000", held for 2.365824917s
	W0803 18:10:56.066546    5090 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:10:56.077170    5090 out.go:177] 
	W0803 18:10:56.083284    5090 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:10:56.083311    5090 out.go:239] * 
	* 
	W0803 18:10:56.085348    5090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:10:56.099108    5090 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.80s)
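
Note: the disk-image preparation (qemu-img convert and resize) succeeds on every attempt; only the network attach fails. To confirm that the guest image itself boots, the logged qemu-system-aarch64 command can be rerun with user-mode networking substituted for the socket netdev. This is an illustrative variation, not something the test executed; the firmware, ISO, and disk paths are the ones logged above:

	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -boot d \
	  -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/boot2docker.iso \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	  /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/custom-flannel-289000/disk.qcow2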

TestNetworkPlugins/group/false/Start (9.9s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.898648334s)

-- stdout --
	* [false-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-289000" primary control-plane node in "false-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:10:58.486374    5210 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:10:58.486494    5210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:10:58.486497    5210 out.go:304] Setting ErrFile to fd 2...
	I0803 18:10:58.486500    5210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:10:58.486652    5210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:10:58.487713    5210 out.go:298] Setting JSON to false
	I0803 18:10:58.504525    5210 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4222,"bootTime":1722729636,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:10:58.504613    5210 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:10:58.510601    5210 out.go:177] * [false-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:10:58.518531    5210 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:10:58.518629    5210 notify.go:220] Checking for updates...
	I0803 18:10:58.525579    5210 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:10:58.528531    5210 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:10:58.533829    5210 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:10:58.536522    5210 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:10:58.539571    5210 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:10:58.542843    5210 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:10:58.542905    5210 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:10:58.542944    5210 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:10:58.546505    5210 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:10:58.553520    5210 start.go:297] selected driver: qemu2
	I0803 18:10:58.553526    5210 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:10:58.553533    5210 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:10:58.555836    5210 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:10:58.558555    5210 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:10:58.561612    5210 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:10:58.561637    5210 cni.go:84] Creating CNI manager for "false"
	I0803 18:10:58.561686    5210 start.go:340] cluster config:
	{Name:false-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:10:58.565695    5210 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:10:58.573505    5210 out.go:177] * Starting "false-289000" primary control-plane node in "false-289000" cluster
	I0803 18:10:58.577556    5210 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:10:58.577570    5210 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:10:58.577579    5210 cache.go:56] Caching tarball of preloaded images
	I0803 18:10:58.577633    5210 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:10:58.577638    5210 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:10:58.577694    5210 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/false-289000/config.json ...
	I0803 18:10:58.577705    5210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/false-289000/config.json: {Name:mkafc9721e2b2b8bec00f40a8ad70ccaadd19357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:10:58.578024    5210 start.go:360] acquireMachinesLock for false-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:10:58.578058    5210 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "false-289000"
	I0803 18:10:58.578070    5210 start.go:93] Provisioning new machine with config: &{Name:false-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:10:58.578093    5210 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:10:58.581506    5210 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:10:58.596645    5210 start.go:159] libmachine.API.Create for "false-289000" (driver="qemu2")
	I0803 18:10:58.596673    5210 client.go:168] LocalClient.Create starting
	I0803 18:10:58.596760    5210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:10:58.596808    5210 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:58.596817    5210 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:58.596857    5210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:10:58.596881    5210 main.go:141] libmachine: Decoding PEM data...
	I0803 18:10:58.596888    5210 main.go:141] libmachine: Parsing certificate...
	I0803 18:10:58.597358    5210 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:10:58.753653    5210 main.go:141] libmachine: Creating SSH key...
	I0803 18:10:58.942419    5210 main.go:141] libmachine: Creating Disk image...
	I0803 18:10:58.942428    5210 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:10:58.942633    5210 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2
	I0803 18:10:58.953140    5210 main.go:141] libmachine: STDOUT: 
	I0803 18:10:58.953158    5210 main.go:141] libmachine: STDERR: 
	I0803 18:10:58.953235    5210 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2 +20000M
	I0803 18:10:58.962455    5210 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:10:58.962477    5210 main.go:141] libmachine: STDERR: 
	I0803 18:10:58.962498    5210 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2
	I0803 18:10:58.962503    5210 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:10:58.962519    5210 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:10:58.962552    5210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:3a:be:a6:af:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2
	I0803 18:10:58.964859    5210 main.go:141] libmachine: STDOUT: 
	I0803 18:10:58.964882    5210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:10:58.964907    5210 client.go:171] duration metric: took 368.237708ms to LocalClient.Create
	I0803 18:11:00.967047    5210 start.go:128] duration metric: took 2.388994125s to createHost
	I0803 18:11:00.967117    5210 start.go:83] releasing machines lock for "false-289000", held for 2.38911875s
	W0803 18:11:00.967218    5210 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:00.978145    5210 out.go:177] * Deleting "false-289000" in qemu2 ...
	W0803 18:11:01.001849    5210 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:01.001886    5210 start.go:729] Will try again in 5 seconds ...
	I0803 18:11:06.003926    5210 start.go:360] acquireMachinesLock for false-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:11:06.004247    5210 start.go:364] duration metric: took 256.125µs to acquireMachinesLock for "false-289000"
	I0803 18:11:06.004357    5210 start.go:93] Provisioning new machine with config: &{Name:false-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:11:06.004555    5210 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:11:06.012073    5210 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:11:06.051301    5210 start.go:159] libmachine.API.Create for "false-289000" (driver="qemu2")
	I0803 18:11:06.051349    5210 client.go:168] LocalClient.Create starting
	I0803 18:11:06.051455    5210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:11:06.051540    5210 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:06.051560    5210 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:06.051621    5210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:11:06.051671    5210 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:06.051682    5210 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:06.052160    5210 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:11:06.210699    5210 main.go:141] libmachine: Creating SSH key...
	I0803 18:11:06.305155    5210 main.go:141] libmachine: Creating Disk image...
	I0803 18:11:06.305167    5210 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:11:06.305386    5210 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2
	I0803 18:11:06.314969    5210 main.go:141] libmachine: STDOUT: 
	I0803 18:11:06.314989    5210 main.go:141] libmachine: STDERR: 
	I0803 18:11:06.315034    5210 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2 +20000M
	I0803 18:11:06.323164    5210 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:11:06.323180    5210 main.go:141] libmachine: STDERR: 
	I0803 18:11:06.323193    5210 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2
	I0803 18:11:06.323198    5210 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:11:06.323209    5210 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:11:06.323256    5210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:ab:ae:36:95:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/false-289000/disk.qcow2
	I0803 18:11:06.324998    5210 main.go:141] libmachine: STDOUT: 
	I0803 18:11:06.325020    5210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:11:06.325033    5210 client.go:171] duration metric: took 273.68625ms to LocalClient.Create
	I0803 18:11:08.327102    5210 start.go:128] duration metric: took 2.32258375s to createHost
	I0803 18:11:08.327134    5210 start.go:83] releasing machines lock for "false-289000", held for 2.322941041s
	W0803 18:11:08.327280    5210 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:08.334826    5210 out.go:177] 
	W0803 18:11:08.338849    5210 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:11:08.338869    5210 out.go:239] * 
	W0803 18:11:08.339592    5210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:11:08.349829    5210 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.90s)
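The wrapper invocation recorded above is the key detail: socket_vmnet_client first connects to /var/run/socket_vmnet and only then execs the command that follows it, handing the connected descriptor to qemu-system-aarch64 as fd 3 (hence -netdev socket,id=net0,fd=3). Because the connect happens before the exec, the failure can be reproduced with any harmless command in place of QEMU; a sketch, with /bin/true as a stand-in for the QEMU command line:

	# With the daemon down, this prints the exact error seen in the report:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /bin/true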

TestNetworkPlugins/group/kindnet/Start (10.03s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.032245541s)

-- stdout --
	* [kindnet-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-289000" primary control-plane node in "kindnet-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:11:10.504307    5321 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:11:10.504423    5321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:11:10.504425    5321 out.go:304] Setting ErrFile to fd 2...
	I0803 18:11:10.504427    5321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:11:10.504566    5321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:11:10.505655    5321 out.go:298] Setting JSON to false
	I0803 18:11:10.522029    5321 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4234,"bootTime":1722729636,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:11:10.522100    5321 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:11:10.529690    5321 out.go:177] * [kindnet-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:11:10.537654    5321 notify.go:220] Checking for updates...
	I0803 18:11:10.540636    5321 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:11:10.543699    5321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:11:10.546623    5321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:11:10.549550    5321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:11:10.552678    5321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:11:10.555671    5321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:11:10.558958    5321 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:11:10.559025    5321 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:11:10.559070    5321 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:11:10.563587    5321 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:11:10.570622    5321 start.go:297] selected driver: qemu2
	I0803 18:11:10.570630    5321 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:11:10.570636    5321 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:11:10.572865    5321 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:11:10.575640    5321 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:11:10.578701    5321 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:11:10.578743    5321 cni.go:84] Creating CNI manager for "kindnet"
	I0803 18:11:10.578747    5321 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 18:11:10.578788    5321 start.go:340] cluster config:
	{Name:kindnet-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:11:10.583097    5321 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:11:10.590677    5321 out.go:177] * Starting "kindnet-289000" primary control-plane node in "kindnet-289000" cluster
	I0803 18:11:10.594648    5321 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:11:10.594667    5321 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:11:10.594681    5321 cache.go:56] Caching tarball of preloaded images
	I0803 18:11:10.594775    5321 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:11:10.594781    5321 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:11:10.594842    5321 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/kindnet-289000/config.json ...
	I0803 18:11:10.594856    5321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/kindnet-289000/config.json: {Name:mkc1fc4a47da53433ac966666963b83536dce318 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:11:10.595184    5321 start.go:360] acquireMachinesLock for kindnet-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:11:10.595214    5321 start.go:364] duration metric: took 25.458µs to acquireMachinesLock for "kindnet-289000"
	I0803 18:11:10.595224    5321 start.go:93] Provisioning new machine with config: &{Name:kindnet-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:11:10.595250    5321 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:11:10.602651    5321 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:11:10.618464    5321 start.go:159] libmachine.API.Create for "kindnet-289000" (driver="qemu2")
	I0803 18:11:10.618492    5321 client.go:168] LocalClient.Create starting
	I0803 18:11:10.618556    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:11:10.618593    5321 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:10.618600    5321 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:10.618639    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:11:10.618663    5321 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:10.618672    5321 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:10.619125    5321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:11:10.770821    5321 main.go:141] libmachine: Creating SSH key...
	I0803 18:11:10.908829    5321 main.go:141] libmachine: Creating Disk image...
	I0803 18:11:10.908837    5321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:11:10.909043    5321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2
	I0803 18:11:10.918523    5321 main.go:141] libmachine: STDOUT: 
	I0803 18:11:10.918541    5321 main.go:141] libmachine: STDERR: 
	I0803 18:11:10.918589    5321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2 +20000M
	I0803 18:11:10.926520    5321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:11:10.926535    5321 main.go:141] libmachine: STDERR: 
	I0803 18:11:10.926550    5321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2
	I0803 18:11:10.926553    5321 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:11:10.926567    5321 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:11:10.926591    5321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:b9:3c:0c:85:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2
	I0803 18:11:10.928221    5321 main.go:141] libmachine: STDOUT: 
	I0803 18:11:10.928235    5321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:11:10.928255    5321 client.go:171] duration metric: took 309.766833ms to LocalClient.Create
	I0803 18:11:12.930410    5321 start.go:128] duration metric: took 2.335196375s to createHost
	I0803 18:11:12.930517    5321 start.go:83] releasing machines lock for "kindnet-289000", held for 2.335358833s
	W0803 18:11:12.930598    5321 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:12.946974    5321 out.go:177] * Deleting "kindnet-289000" in qemu2 ...
	W0803 18:11:12.974326    5321 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:12.974363    5321 start.go:729] Will try again in 5 seconds ...
	I0803 18:11:17.976086    5321 start.go:360] acquireMachinesLock for kindnet-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:11:17.976543    5321 start.go:364] duration metric: took 360.541µs to acquireMachinesLock for "kindnet-289000"
	I0803 18:11:17.976621    5321 start.go:93] Provisioning new machine with config: &{Name:kindnet-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:11:17.976881    5321 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:11:17.985413    5321 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:11:18.031442    5321 start.go:159] libmachine.API.Create for "kindnet-289000" (driver="qemu2")
	I0803 18:11:18.031491    5321 client.go:168] LocalClient.Create starting
	I0803 18:11:18.031617    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:11:18.031703    5321 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:18.031718    5321 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:18.031780    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:11:18.031831    5321 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:18.031842    5321 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:18.032481    5321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:11:18.194035    5321 main.go:141] libmachine: Creating SSH key...
	I0803 18:11:18.449226    5321 main.go:141] libmachine: Creating Disk image...
	I0803 18:11:18.449240    5321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:11:18.449502    5321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2
	I0803 18:11:18.459427    5321 main.go:141] libmachine: STDOUT: 
	I0803 18:11:18.459449    5321 main.go:141] libmachine: STDERR: 
	I0803 18:11:18.459506    5321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2 +20000M
	I0803 18:11:18.467667    5321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:11:18.467686    5321 main.go:141] libmachine: STDERR: 
	I0803 18:11:18.467706    5321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2
	I0803 18:11:18.467711    5321 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:11:18.467718    5321 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:11:18.467753    5321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:4e:69:37:9d:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kindnet-289000/disk.qcow2
	I0803 18:11:18.469584    5321 main.go:141] libmachine: STDOUT: 
	I0803 18:11:18.469599    5321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:11:18.469612    5321 client.go:171] duration metric: took 438.127416ms to LocalClient.Create
	I0803 18:11:20.471773    5321 start.go:128] duration metric: took 2.494916541s to createHost
	I0803 18:11:20.471907    5321 start.go:83] releasing machines lock for "kindnet-289000", held for 2.495412875s
	W0803 18:11:20.472277    5321 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:20.480971    5321 out.go:177] 
	W0803 18:11:20.485333    5321 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:11:20.485363    5321 out.go:239] * 
	W0803 18:11:20.492595    5321 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:11:20.496948    5321 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.03s)
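All remaining Start tests in this group will hit the same refused connection until the daemon is brought back up, so restarting socket_vmnet on the agent is the fix to try before re-running the suite. A hedged sketch; the launchd label and the Homebrew formula name below are taken from the socket_vmnet project's documentation, not from this report:

	# Source install (launchd service registered by "make install"):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# Homebrew install:
	sudo brew services restart socket_vmnet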

TestNetworkPlugins/group/flannel/Start (9.91s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
E0803 18:11:23.552473    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.913575834s)

-- stdout --
	* [flannel-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-289000" primary control-plane node in "flannel-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:11:22.808593    5434 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:11:22.808746    5434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:11:22.808750    5434 out.go:304] Setting ErrFile to fd 2...
	I0803 18:11:22.808752    5434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:11:22.808888    5434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:11:22.810123    5434 out.go:298] Setting JSON to false
	I0803 18:11:22.827910    5434 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4246,"bootTime":1722729636,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:11:22.827983    5434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:11:22.832390    5434 out.go:177] * [flannel-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:11:22.839239    5434 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:11:22.839367    5434 notify.go:220] Checking for updates...
	I0803 18:11:22.846205    5434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:11:22.849235    5434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:11:22.852210    5434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:11:22.855206    5434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:11:22.858195    5434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:11:22.861677    5434 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:11:22.861746    5434 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:11:22.861798    5434 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:11:22.865093    5434 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:11:22.872196    5434 start.go:297] selected driver: qemu2
	I0803 18:11:22.872203    5434 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:11:22.872210    5434 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:11:22.874635    5434 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:11:22.875884    5434 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:11:22.878289    5434 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:11:22.878330    5434 cni.go:84] Creating CNI manager for "flannel"
	I0803 18:11:22.878334    5434 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0803 18:11:22.878376    5434 start.go:340] cluster config:
	{Name:flannel-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:11:22.882078    5434 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:11:22.885196    5434 out.go:177] * Starting "flannel-289000" primary control-plane node in "flannel-289000" cluster
	I0803 18:11:22.893181    5434 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:11:22.893214    5434 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:11:22.893226    5434 cache.go:56] Caching tarball of preloaded images
	I0803 18:11:22.893323    5434 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:11:22.893329    5434 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:11:22.893392    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/flannel-289000/config.json ...
	I0803 18:11:22.893403    5434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/flannel-289000/config.json: {Name:mk08df93a85774a49f7c60acb8e5b89cd283b516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:11:22.893686    5434 start.go:360] acquireMachinesLock for flannel-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:11:22.893717    5434 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "flannel-289000"
	I0803 18:11:22.893727    5434 start.go:93] Provisioning new machine with config: &{Name:flannel-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:11:22.893757    5434 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:11:22.898237    5434 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:11:22.915000    5434 start.go:159] libmachine.API.Create for "flannel-289000" (driver="qemu2")
	I0803 18:11:22.915019    5434 client.go:168] LocalClient.Create starting
	I0803 18:11:22.915096    5434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:11:22.915126    5434 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:22.915134    5434 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:22.915166    5434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:11:22.915192    5434 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:22.915200    5434 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:22.915516    5434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:11:23.077815    5434 main.go:141] libmachine: Creating SSH key...
	I0803 18:11:23.286198    5434 main.go:141] libmachine: Creating Disk image...
	I0803 18:11:23.286211    5434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:11:23.286468    5434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2
	I0803 18:11:23.296378    5434 main.go:141] libmachine: STDOUT: 
	I0803 18:11:23.296413    5434 main.go:141] libmachine: STDERR: 
	I0803 18:11:23.296474    5434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2 +20000M
	I0803 18:11:23.304440    5434 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:11:23.304456    5434 main.go:141] libmachine: STDERR: 
	I0803 18:11:23.304473    5434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2
	I0803 18:11:23.304478    5434 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:11:23.304488    5434 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:11:23.304515    5434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:c4:ff:98:2f:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2
	I0803 18:11:23.306194    5434 main.go:141] libmachine: STDOUT: 
	I0803 18:11:23.306207    5434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:11:23.306229    5434 client.go:171] duration metric: took 391.216875ms to LocalClient.Create
	I0803 18:11:25.308369    5434 start.go:128] duration metric: took 2.414645917s to createHost
	I0803 18:11:25.308438    5434 start.go:83] releasing machines lock for "flannel-289000", held for 2.414779375s
	W0803 18:11:25.308507    5434 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:25.318521    5434 out.go:177] * Deleting "flannel-289000" in qemu2 ...
	W0803 18:11:25.344050    5434 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:25.344083    5434 start.go:729] Will try again in 5 seconds ...
	I0803 18:11:30.345947    5434 start.go:360] acquireMachinesLock for flannel-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:11:30.346577    5434 start.go:364] duration metric: took 482.334µs to acquireMachinesLock for "flannel-289000"
	I0803 18:11:30.346745    5434 start.go:93] Provisioning new machine with config: &{Name:flannel-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:11:30.347094    5434 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:11:30.352752    5434 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:11:30.402632    5434 start.go:159] libmachine.API.Create for "flannel-289000" (driver="qemu2")
	I0803 18:11:30.402681    5434 client.go:168] LocalClient.Create starting
	I0803 18:11:30.402816    5434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:11:30.402889    5434 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:30.402906    5434 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:30.402997    5434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:11:30.403063    5434 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:30.403107    5434 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:30.403627    5434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:11:30.566884    5434 main.go:141] libmachine: Creating SSH key...
	I0803 18:11:30.630995    5434 main.go:141] libmachine: Creating Disk image...
	I0803 18:11:30.631008    5434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:11:30.631226    5434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2
	I0803 18:11:30.641076    5434 main.go:141] libmachine: STDOUT: 
	I0803 18:11:30.641095    5434 main.go:141] libmachine: STDERR: 
	I0803 18:11:30.641177    5434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2 +20000M
	I0803 18:11:30.650139    5434 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:11:30.650170    5434 main.go:141] libmachine: STDERR: 
	I0803 18:11:30.650180    5434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2
	I0803 18:11:30.650187    5434 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:11:30.650194    5434 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:11:30.650221    5434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:3e:bd:f5:6c:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/flannel-289000/disk.qcow2
	I0803 18:11:30.652238    5434 main.go:141] libmachine: STDOUT: 
	I0803 18:11:30.652254    5434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:11:30.652266    5434 client.go:171] duration metric: took 249.587875ms to LocalClient.Create
	I0803 18:11:32.654535    5434 start.go:128] duration metric: took 2.307441166s to createHost
	I0803 18:11:32.654646    5434 start.go:83] releasing machines lock for "flannel-289000", held for 2.3081115s
	W0803 18:11:32.655018    5434 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:32.670570    5434 out.go:177] 
	W0803 18:11:32.674686    5434 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:11:32.674740    5434 out.go:239] * 
	* 
	W0803 18:11:32.676556    5434 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:11:32.683657    5434 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.91s)
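Every start attempt above fails at the same step: socket_vmnet_client cannot reach "/var/run/socket_vmnet" (Connection refused), so the qemu2 VM never gets a network and createHost aborts. This points at the host's socket_vmnet daemon being down, not at the flannel CNI under test. A minimal host-side check, assuming a Homebrew-managed socket_vmnet install at the paths shown in the log:

	# is the daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# if not, (re)start the service; the Homebrew socket_vmnet service must run as root
	sudo brew services start socket_vmnet

The identical "Connection refused" error recurs in every network-plugin test below, so these are one environment failure rather than per-plugin regressions.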

TestNetworkPlugins/group/enable-default-cni/Start (10.01s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.00645475s)

-- stdout --
	* [enable-default-cni-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-289000" primary control-plane node in "enable-default-cni-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:11:35.038445    5553 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:11:35.038589    5553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:11:35.038596    5553 out.go:304] Setting ErrFile to fd 2...
	I0803 18:11:35.038599    5553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:11:35.038730    5553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:11:35.039845    5553 out.go:298] Setting JSON to false
	I0803 18:11:35.055810    5553 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4259,"bootTime":1722729636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:11:35.055881    5553 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:11:35.062234    5553 out.go:177] * [enable-default-cni-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:11:35.070075    5553 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:11:35.070148    5553 notify.go:220] Checking for updates...
	I0803 18:11:35.076194    5553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:11:35.079127    5553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:11:35.082185    5553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:11:35.085182    5553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:11:35.088157    5553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:11:35.091537    5553 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:11:35.091598    5553 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:11:35.091639    5553 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:11:35.096163    5553 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:11:35.103174    5553 start.go:297] selected driver: qemu2
	I0803 18:11:35.103183    5553 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:11:35.103193    5553 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:11:35.105473    5553 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:11:35.108179    5553 out.go:177] * Automatically selected the socket_vmnet network
	E0803 18:11:35.109589    5553 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0803 18:11:35.109602    5553 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:11:35.109647    5553 cni.go:84] Creating CNI manager for "bridge"
	I0803 18:11:35.109651    5553 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 18:11:35.109685    5553 start.go:340] cluster config:
	{Name:enable-default-cni-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:11:35.113326    5553 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:11:35.122173    5553 out.go:177] * Starting "enable-default-cni-289000" primary control-plane node in "enable-default-cni-289000" cluster
	I0803 18:11:35.126186    5553 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:11:35.126210    5553 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:11:35.126222    5553 cache.go:56] Caching tarball of preloaded images
	I0803 18:11:35.126285    5553 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:11:35.126298    5553 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:11:35.126356    5553 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/enable-default-cni-289000/config.json ...
	I0803 18:11:35.126368    5553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/enable-default-cni-289000/config.json: {Name:mkb8709d61e9c3185307188b1fef11c0391717f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:11:35.126580    5553 start.go:360] acquireMachinesLock for enable-default-cni-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:11:35.126620    5553 start.go:364] duration metric: took 27.541µs to acquireMachinesLock for "enable-default-cni-289000"
	I0803 18:11:35.126634    5553 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:11:35.126679    5553 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:11:35.135126    5553 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:11:35.152898    5553 start.go:159] libmachine.API.Create for "enable-default-cni-289000" (driver="qemu2")
	I0803 18:11:35.152923    5553 client.go:168] LocalClient.Create starting
	I0803 18:11:35.152991    5553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:11:35.153024    5553 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:35.153033    5553 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:35.153073    5553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:11:35.153099    5553 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:35.153106    5553 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:35.153463    5553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:11:35.304982    5553 main.go:141] libmachine: Creating SSH key...
	I0803 18:11:35.645556    5553 main.go:141] libmachine: Creating Disk image...
	I0803 18:11:35.645571    5553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:11:35.645835    5553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2
	I0803 18:11:35.655817    5553 main.go:141] libmachine: STDOUT: 
	I0803 18:11:35.655840    5553 main.go:141] libmachine: STDERR: 
	I0803 18:11:35.655889    5553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2 +20000M
	I0803 18:11:35.664080    5553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:11:35.664092    5553 main.go:141] libmachine: STDERR: 
	I0803 18:11:35.664110    5553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2
	I0803 18:11:35.664115    5553 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:11:35.664129    5553 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:11:35.664162    5553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:42:29:a7:cf:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2
	I0803 18:11:35.665978    5553 main.go:141] libmachine: STDOUT: 
	I0803 18:11:35.665994    5553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:11:35.666014    5553 client.go:171] duration metric: took 513.100333ms to LocalClient.Create
	I0803 18:11:37.668178    5553 start.go:128] duration metric: took 2.54153775s to createHost
	I0803 18:11:37.668241    5553 start.go:83] releasing machines lock for "enable-default-cni-289000", held for 2.541684s
	W0803 18:11:37.668366    5553 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:37.682421    5553 out.go:177] * Deleting "enable-default-cni-289000" in qemu2 ...
	W0803 18:11:37.703418    5553 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:37.703436    5553 start.go:729] Will try again in 5 seconds ...
	I0803 18:11:42.705572    5553 start.go:360] acquireMachinesLock for enable-default-cni-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:11:42.706096    5553 start.go:364] duration metric: took 396.667µs to acquireMachinesLock for "enable-default-cni-289000"
	I0803 18:11:42.706176    5553 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:11:42.706410    5553 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:11:42.714908    5553 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:11:42.759888    5553 start.go:159] libmachine.API.Create for "enable-default-cni-289000" (driver="qemu2")
	I0803 18:11:42.759937    5553 client.go:168] LocalClient.Create starting
	I0803 18:11:42.760064    5553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:11:42.760131    5553 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:42.760146    5553 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:42.760198    5553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:11:42.760240    5553 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:42.760252    5553 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:42.760822    5553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:11:42.919444    5553 main.go:141] libmachine: Creating SSH key...
	I0803 18:11:42.956390    5553 main.go:141] libmachine: Creating Disk image...
	I0803 18:11:42.956396    5553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:11:42.956596    5553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2
	I0803 18:11:42.965815    5553 main.go:141] libmachine: STDOUT: 
	I0803 18:11:42.965836    5553 main.go:141] libmachine: STDERR: 
	I0803 18:11:42.965902    5553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2 +20000M
	I0803 18:11:42.974047    5553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:11:42.974062    5553 main.go:141] libmachine: STDERR: 
	I0803 18:11:42.974073    5553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2
	I0803 18:11:42.974080    5553 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:11:42.974090    5553 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:11:42.974118    5553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:87:e1:6a:4e:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/enable-default-cni-289000/disk.qcow2
	I0803 18:11:42.975814    5553 main.go:141] libmachine: STDOUT: 
	I0803 18:11:42.975831    5553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:11:42.975851    5553 client.go:171] duration metric: took 215.915667ms to LocalClient.Create
	I0803 18:11:44.978022    5553 start.go:128] duration metric: took 2.271633625s to createHost
	I0803 18:11:44.978098    5553 start.go:83] releasing machines lock for "enable-default-cni-289000", held for 2.272041834s
	W0803 18:11:44.978478    5553 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:44.989649    5553 out.go:177] 
	W0803 18:11:44.993066    5553 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:11:44.993082    5553 out.go:239] * 
	* 
	W0803 18:11:44.994749    5553 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:11:45.003406    5553 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.01s)
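As the stderr above shows, --enable-default-cni is deprecated and is rewritten internally to --cni=bridge (start_flags.go:464). An equivalent invocation without the deprecated flag, sketched here only to show the mapping, would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2

The failure itself is again the socket_vmnet "Connection refused" error, not the CNI selection; the bridge test that follows uses --cni=bridge directly and fails the same way.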

TestNetworkPlugins/group/bridge/Start (9.86s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.862459667s)

-- stdout --
	* [bridge-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-289000" primary control-plane node in "bridge-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:11:47.232047    5665 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:11:47.232185    5665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:11:47.232189    5665 out.go:304] Setting ErrFile to fd 2...
	I0803 18:11:47.232191    5665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:11:47.232318    5665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:11:47.233454    5665 out.go:298] Setting JSON to false
	I0803 18:11:47.249882    5665 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4271,"bootTime":1722729636,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:11:47.249954    5665 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:11:47.256089    5665 out.go:177] * [bridge-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:11:47.265051    5665 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:11:47.265091    5665 notify.go:220] Checking for updates...
	I0803 18:11:47.272075    5665 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:11:47.275008    5665 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:11:47.278037    5665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:11:47.281089    5665 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:11:47.284056    5665 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:11:47.287388    5665 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:11:47.287457    5665 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:11:47.287518    5665 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:11:47.291053    5665 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:11:47.297997    5665 start.go:297] selected driver: qemu2
	I0803 18:11:47.298005    5665 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:11:47.298010    5665 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:11:47.300379    5665 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:11:47.303045    5665 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:11:47.304497    5665 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:11:47.304557    5665 cni.go:84] Creating CNI manager for "bridge"
	I0803 18:11:47.304561    5665 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 18:11:47.304585    5665 start.go:340] cluster config:
	{Name:bridge-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:11:47.308102    5665 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:11:47.315030    5665 out.go:177] * Starting "bridge-289000" primary control-plane node in "bridge-289000" cluster
	I0803 18:11:47.318929    5665 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:11:47.318942    5665 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:11:47.318953    5665 cache.go:56] Caching tarball of preloaded images
	I0803 18:11:47.319007    5665 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:11:47.319015    5665 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:11:47.319065    5665 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/bridge-289000/config.json ...
	I0803 18:11:47.319076    5665 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/bridge-289000/config.json: {Name:mk631acbfbdcf94f5a6e3ceb97ee73c5c6af2372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:11:47.319303    5665 start.go:360] acquireMachinesLock for bridge-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:11:47.319343    5665 start.go:364] duration metric: took 32.833µs to acquireMachinesLock for "bridge-289000"
	I0803 18:11:47.319356    5665 start.go:93] Provisioning new machine with config: &{Name:bridge-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:11:47.319383    5665 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:11:47.327021    5665 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:11:47.342110    5665 start.go:159] libmachine.API.Create for "bridge-289000" (driver="qemu2")
	I0803 18:11:47.342136    5665 client.go:168] LocalClient.Create starting
	I0803 18:11:47.342204    5665 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:11:47.342235    5665 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:47.342243    5665 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:47.342278    5665 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:11:47.342302    5665 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:47.342309    5665 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:47.342739    5665 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:11:47.495539    5665 main.go:141] libmachine: Creating SSH key...
	I0803 18:11:47.621841    5665 main.go:141] libmachine: Creating Disk image...
	I0803 18:11:47.621853    5665 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:11:47.622072    5665 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2
	I0803 18:11:47.631686    5665 main.go:141] libmachine: STDOUT: 
	I0803 18:11:47.631703    5665 main.go:141] libmachine: STDERR: 
	I0803 18:11:47.631752    5665 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2 +20000M
	I0803 18:11:47.639648    5665 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:11:47.639660    5665 main.go:141] libmachine: STDERR: 
	I0803 18:11:47.639674    5665 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2
	I0803 18:11:47.639677    5665 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:11:47.639690    5665 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:11:47.639725    5665 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:81:f0:b1:8d:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2
	I0803 18:11:47.641391    5665 main.go:141] libmachine: STDOUT: 
	I0803 18:11:47.641411    5665 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:11:47.641427    5665 client.go:171] duration metric: took 299.296042ms to LocalClient.Create
	I0803 18:11:49.643589    5665 start.go:128] duration metric: took 2.324232125s to createHost
	I0803 18:11:49.643674    5665 start.go:83] releasing machines lock for "bridge-289000", held for 2.324388333s
	W0803 18:11:49.643835    5665 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:49.657100    5665 out.go:177] * Deleting "bridge-289000" in qemu2 ...
	W0803 18:11:49.685670    5665 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:49.685713    5665 start.go:729] Will try again in 5 seconds ...
	I0803 18:11:54.687727    5665 start.go:360] acquireMachinesLock for bridge-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:11:54.688018    5665 start.go:364] duration metric: took 245µs to acquireMachinesLock for "bridge-289000"
	I0803 18:11:54.688054    5665 start.go:93] Provisioning new machine with config: &{Name:bridge-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:11:54.688189    5665 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:11:54.695608    5665 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:11:54.726471    5665 start.go:159] libmachine.API.Create for "bridge-289000" (driver="qemu2")
	I0803 18:11:54.726514    5665 client.go:168] LocalClient.Create starting
	I0803 18:11:54.726608    5665 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:11:54.726665    5665 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:54.726678    5665 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:54.726740    5665 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:11:54.726778    5665 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:54.726787    5665 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:54.727233    5665 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:11:54.883002    5665 main.go:141] libmachine: Creating SSH key...
	I0803 18:11:55.004797    5665 main.go:141] libmachine: Creating Disk image...
	I0803 18:11:55.004805    5665 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:11:55.004996    5665 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2
	I0803 18:11:55.014371    5665 main.go:141] libmachine: STDOUT: 
	I0803 18:11:55.014395    5665 main.go:141] libmachine: STDERR: 
	I0803 18:11:55.014452    5665 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2 +20000M
	I0803 18:11:55.022596    5665 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:11:55.022611    5665 main.go:141] libmachine: STDERR: 
	I0803 18:11:55.022624    5665 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2
	I0803 18:11:55.022627    5665 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:11:55.022637    5665 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:11:55.022676    5665 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:99:a3:a0:07:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/bridge-289000/disk.qcow2
	I0803 18:11:55.024414    5665 main.go:141] libmachine: STDOUT: 
	I0803 18:11:55.024431    5665 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:11:55.024447    5665 client.go:171] duration metric: took 297.935875ms to LocalClient.Create
	I0803 18:11:57.026573    5665 start.go:128] duration metric: took 2.338424667s to createHost
	I0803 18:11:57.026620    5665 start.go:83] releasing machines lock for "bridge-289000", held for 2.338651875s
	W0803 18:11:57.027079    5665 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:11:57.040243    5665 out.go:177] 
	W0803 18:11:57.043740    5665 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:11:57.043762    5665 out.go:239] * 
	* 
	W0803 18:11:57.045412    5665 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:11:57.055668    5665 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.86s)
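Both create attempts above break at the same step: the qemu2 driver launches the VM through socket_vmnet_client, and the connect to the /var/run/socket_vmnet unix socket is refused. That points at the socket_vmnet daemon not listening on the CI host, not at anything specific to the bridge CNI. A minimal pre-flight probe of that precondition, sketched in Go under the assumption that dialing the socket is a sufficient check (the file name probe_socket_vmnet.go and the 2-second timeout are illustrative, not part of the test suite):

// probe_socket_vmnet.go — hypothetical pre-flight check for the daemon socket.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the same unix socket the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With no daemon listening, this mirrors the failure above:
		// connect: connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Running a probe like this before the network-plugin group would separate host-environment breakage from genuine plugin regressions.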

TestNetworkPlugins/group/kubenet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-289000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.825516042s)

-- stdout --
	* [kubenet-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-289000" primary control-plane node in "kubenet-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:11:59.226705    5774 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:11:59.226850    5774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:11:59.226853    5774 out.go:304] Setting ErrFile to fd 2...
	I0803 18:11:59.226856    5774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:11:59.226995    5774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:11:59.228021    5774 out.go:298] Setting JSON to false
	I0803 18:11:59.244314    5774 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4283,"bootTime":1722729636,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:11:59.244390    5774 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:11:59.250253    5774 out.go:177] * [kubenet-289000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:11:59.258057    5774 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:11:59.258116    5774 notify.go:220] Checking for updates...
	I0803 18:11:59.264930    5774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:11:59.267931    5774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:11:59.271018    5774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:11:59.273969    5774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:11:59.276971    5774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:11:59.280436    5774 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:11:59.280504    5774 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:11:59.280545    5774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:11:59.284911    5774 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:11:59.291995    5774 start.go:297] selected driver: qemu2
	I0803 18:11:59.292001    5774 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:11:59.292007    5774 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:11:59.294202    5774 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:11:59.297952    5774 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:11:59.301052    5774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:11:59.301066    5774 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0803 18:11:59.301089    5774 start.go:340] cluster config:
	{Name:kubenet-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:11:59.304584    5774 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:11:59.313016    5774 out.go:177] * Starting "kubenet-289000" primary control-plane node in "kubenet-289000" cluster
	I0803 18:11:59.316978    5774 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:11:59.316991    5774 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:11:59.317000    5774 cache.go:56] Caching tarball of preloaded images
	I0803 18:11:59.317053    5774 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:11:59.317059    5774 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:11:59.317114    5774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/kubenet-289000/config.json ...
	I0803 18:11:59.317124    5774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/kubenet-289000/config.json: {Name:mk131b84072e8d684e8cccb11a14ad437c07b97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:11:59.317415    5774 start.go:360] acquireMachinesLock for kubenet-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:11:59.317446    5774 start.go:364] duration metric: took 26µs to acquireMachinesLock for "kubenet-289000"
	I0803 18:11:59.317456    5774 start.go:93] Provisioning new machine with config: &{Name:kubenet-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:11:59.317492    5774 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:11:59.324948    5774 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:11:59.340851    5774 start.go:159] libmachine.API.Create for "kubenet-289000" (driver="qemu2")
	I0803 18:11:59.340875    5774 client.go:168] LocalClient.Create starting
	I0803 18:11:59.340937    5774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:11:59.340965    5774 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:59.340977    5774 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:59.341013    5774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:11:59.341037    5774 main.go:141] libmachine: Decoding PEM data...
	I0803 18:11:59.341045    5774 main.go:141] libmachine: Parsing certificate...
	I0803 18:11:59.341390    5774 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:11:59.492596    5774 main.go:141] libmachine: Creating SSH key...
	I0803 18:11:59.569758    5774 main.go:141] libmachine: Creating Disk image...
	I0803 18:11:59.569764    5774 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:11:59.569940    5774 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2
	I0803 18:11:59.579222    5774 main.go:141] libmachine: STDOUT: 
	I0803 18:11:59.579238    5774 main.go:141] libmachine: STDERR: 
	I0803 18:11:59.579291    5774 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2 +20000M
	I0803 18:11:59.587393    5774 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:11:59.587422    5774 main.go:141] libmachine: STDERR: 
	I0803 18:11:59.587438    5774 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2
	I0803 18:11:59.587443    5774 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:11:59.587451    5774 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:11:59.587479    5774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:dd:c8:4d:43:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2
	I0803 18:11:59.589210    5774 main.go:141] libmachine: STDOUT: 
	I0803 18:11:59.589225    5774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:11:59.589243    5774 client.go:171] duration metric: took 248.370208ms to LocalClient.Create
	I0803 18:12:01.591389    5774 start.go:128] duration metric: took 2.273932667s to createHost
	I0803 18:12:01.591482    5774 start.go:83] releasing machines lock for "kubenet-289000", held for 2.274090459s
	W0803 18:12:01.591618    5774 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:01.603941    5774 out.go:177] * Deleting "kubenet-289000" in qemu2 ...
	W0803 18:12:01.634850    5774 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:01.634877    5774 start.go:729] Will try again in 5 seconds ...
	I0803 18:12:06.637029    5774 start.go:360] acquireMachinesLock for kubenet-289000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:06.637702    5774 start.go:364] duration metric: took 499.625µs to acquireMachinesLock for "kubenet-289000"
	I0803 18:12:06.637865    5774 start.go:93] Provisioning new machine with config: &{Name:kubenet-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:12:06.638159    5774 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:12:06.646824    5774 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 18:12:06.697078    5774 start.go:159] libmachine.API.Create for "kubenet-289000" (driver="qemu2")
	I0803 18:12:06.697132    5774 client.go:168] LocalClient.Create starting
	I0803 18:12:06.697269    5774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:12:06.697350    5774 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:06.697367    5774 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:06.697431    5774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:12:06.697478    5774 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:06.697494    5774 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:06.698083    5774 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:12:06.859978    5774 main.go:141] libmachine: Creating SSH key...
	I0803 18:12:06.959760    5774 main.go:141] libmachine: Creating Disk image...
	I0803 18:12:06.959767    5774 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:12:06.959954    5774 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2
	I0803 18:12:06.969349    5774 main.go:141] libmachine: STDOUT: 
	I0803 18:12:06.969371    5774 main.go:141] libmachine: STDERR: 
	I0803 18:12:06.969425    5774 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2 +20000M
	I0803 18:12:06.977609    5774 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:12:06.977624    5774 main.go:141] libmachine: STDERR: 
	I0803 18:12:06.977635    5774 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2
	I0803 18:12:06.977640    5774 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:12:06.977658    5774 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:06.977682    5774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:96:02:a1:63:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/kubenet-289000/disk.qcow2
	I0803 18:12:06.979398    5774 main.go:141] libmachine: STDOUT: 
	I0803 18:12:06.979413    5774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:06.979427    5774 client.go:171] duration metric: took 282.297ms to LocalClient.Create
	I0803 18:12:08.981597    5774 start.go:128] duration metric: took 2.343451042s to createHost
	I0803 18:12:08.981660    5774 start.go:83] releasing machines lock for "kubenet-289000", held for 2.3439955s
	W0803 18:12:08.981995    5774 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:08.995044    5774 out.go:177] 
	W0803 18:12:08.999106    5774 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:12:08.999138    5774 out.go:239] * 
	* 
	W0803 18:12:09.000702    5774 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:12:09.011053    5774 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.83s)
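The kubenet run fails identically to the other network-plugin starts (two ~2.3s create attempts, a retry after 5 seconds, then exit status 80 / GUEST_PROVISION), which suggests the failure can be reproduced without minikube by invoking socket_vmnet_client the way libmachine does. A sketch under the assumption that socket_vmnet_client connects to the socket before exec'ing the wrapped command; /usr/bin/true stands in for the logged qemu-system-aarch64 invocation, and the file name repro_vmnet.go is hypothetical:

// repro_vmnet.go — hypothetical standalone reproduction of the failing launch step.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same client binary and socket path as the logged VM launch command;
	// the wrapped command is a no-op stand-in for the real qemu invocation.
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet", "/usr/bin/true")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// Expected while the daemon is down:
		//   Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintln(os.Stderr, "launch step failed:", err)
		os.Exit(1)
	}
}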

TestStartStop/group/old-k8s-version/serial/FirstStart (9.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.796226333s)

-- stdout --
	* [old-k8s-version-003000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-003000" primary control-plane node in "old-k8s-version-003000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-003000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:12:11.420688    5890 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:12:11.420828    5890 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:11.420831    5890 out.go:304] Setting ErrFile to fd 2...
	I0803 18:12:11.420834    5890 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:11.420976    5890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:12:11.422156    5890 out.go:298] Setting JSON to false
	I0803 18:12:11.438637    5890 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4295,"bootTime":1722729636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:12:11.438713    5890 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:12:11.445231    5890 out.go:177] * [old-k8s-version-003000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:12:11.453126    5890 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:12:11.453198    5890 notify.go:220] Checking for updates...
	I0803 18:12:11.460091    5890 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:12:11.463142    5890 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:12:11.466104    5890 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:12:11.469073    5890 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:12:11.472147    5890 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:12:11.475419    5890 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:12:11.475485    5890 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:12:11.475541    5890 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:12:11.479068    5890 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:12:11.486118    5890 start.go:297] selected driver: qemu2
	I0803 18:12:11.486123    5890 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:12:11.486128    5890 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:12:11.488596    5890 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:12:11.492133    5890 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:12:11.495144    5890 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:12:11.495171    5890 cni.go:84] Creating CNI manager for ""
	I0803 18:12:11.495178    5890 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0803 18:12:11.495207    5890 start.go:340] cluster config:
	{Name:old-k8s-version-003000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:12:11.498965    5890 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:11.507114    5890 out.go:177] * Starting "old-k8s-version-003000" primary control-plane node in "old-k8s-version-003000" cluster
	I0803 18:12:11.511095    5890 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 18:12:11.511110    5890 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 18:12:11.511121    5890 cache.go:56] Caching tarball of preloaded images
	I0803 18:12:11.511178    5890 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:12:11.511184    5890 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0803 18:12:11.511254    5890 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/old-k8s-version-003000/config.json ...
	I0803 18:12:11.511265    5890 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/old-k8s-version-003000/config.json: {Name:mkb8e6112ea54e9026d2ac5b41ed672cc983da2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:12:11.511539    5890 start.go:360] acquireMachinesLock for old-k8s-version-003000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:11.511574    5890 start.go:364] duration metric: took 29.125µs to acquireMachinesLock for "old-k8s-version-003000"
	I0803 18:12:11.511584    5890 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-003000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:12:11.511611    5890 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:12:11.519068    5890 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:12:11.534069    5890 start.go:159] libmachine.API.Create for "old-k8s-version-003000" (driver="qemu2")
	I0803 18:12:11.534096    5890 client.go:168] LocalClient.Create starting
	I0803 18:12:11.534167    5890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:12:11.534200    5890 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:11.534208    5890 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:11.534252    5890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:12:11.534275    5890 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:11.534283    5890 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:11.534625    5890 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:12:11.689088    5890 main.go:141] libmachine: Creating SSH key...
	I0803 18:12:11.792669    5890 main.go:141] libmachine: Creating Disk image...
	I0803 18:12:11.792675    5890 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:12:11.792867    5890 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0803 18:12:11.802089    5890 main.go:141] libmachine: STDOUT: 
	I0803 18:12:11.802110    5890 main.go:141] libmachine: STDERR: 
	I0803 18:12:11.802154    5890 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2 +20000M
	I0803 18:12:11.810134    5890 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:12:11.810148    5890 main.go:141] libmachine: STDERR: 
	I0803 18:12:11.810160    5890 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0803 18:12:11.810164    5890 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:12:11.810178    5890 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:11.810210    5890 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:7f:ee:72:43:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0803 18:12:11.811889    5890 main.go:141] libmachine: STDOUT: 
	I0803 18:12:11.811902    5890 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:11.811921    5890 client.go:171] duration metric: took 277.82875ms to LocalClient.Create
	I0803 18:12:13.814170    5890 start.go:128] duration metric: took 2.302593417s to createHost
	I0803 18:12:13.814259    5890 start.go:83] releasing machines lock for "old-k8s-version-003000", held for 2.302741s
	W0803 18:12:13.814358    5890 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:13.826899    5890 out.go:177] * Deleting "old-k8s-version-003000" in qemu2 ...
	W0803 18:12:13.858164    5890 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:13.858194    5890 start.go:729] Will try again in 5 seconds ...
	I0803 18:12:18.860244    5890 start.go:360] acquireMachinesLock for old-k8s-version-003000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:18.860450    5890 start.go:364] duration metric: took 172.75µs to acquireMachinesLock for "old-k8s-version-003000"
	I0803 18:12:18.860501    5890 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-003000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:12:18.860563    5890 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:12:18.872876    5890 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:12:18.893439    5890 start.go:159] libmachine.API.Create for "old-k8s-version-003000" (driver="qemu2")
	I0803 18:12:18.893476    5890 client.go:168] LocalClient.Create starting
	I0803 18:12:18.893544    5890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:12:18.893588    5890 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:18.893598    5890 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:18.893636    5890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:12:18.893675    5890 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:18.893685    5890 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:18.894004    5890 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:12:19.068142    5890 main.go:141] libmachine: Creating SSH key...
	I0803 18:12:19.123317    5890 main.go:141] libmachine: Creating Disk image...
	I0803 18:12:19.123322    5890 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:12:19.123513    5890 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0803 18:12:19.132947    5890 main.go:141] libmachine: STDOUT: 
	I0803 18:12:19.132965    5890 main.go:141] libmachine: STDERR: 
	I0803 18:12:19.133024    5890 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2 +20000M
	I0803 18:12:19.141058    5890 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:12:19.141072    5890 main.go:141] libmachine: STDERR: 
	I0803 18:12:19.141084    5890 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0803 18:12:19.141089    5890 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:12:19.141099    5890 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:19.141123    5890 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c2:e6:94:1d:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0803 18:12:19.142796    5890 main.go:141] libmachine: STDOUT: 
	I0803 18:12:19.142812    5890 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:19.142832    5890 client.go:171] duration metric: took 249.358583ms to LocalClient.Create
	I0803 18:12:21.145009    5890 start.go:128] duration metric: took 2.284474292s to createHost
	I0803 18:12:21.145139    5890 start.go:83] releasing machines lock for "old-k8s-version-003000", held for 2.2847275s
	W0803 18:12:21.145517    5890 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-003000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-003000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:21.159038    5890 out.go:177] 
	W0803 18:12:21.163069    5890 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:12:21.163146    5890 out.go:239] * 
	* 
	W0803 18:12:21.166209    5890 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:12:21.173987    5890 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (65.749916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.86s)
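
Every failure in this group reduces to the same line in the log above: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first reach the socket_vmnet daemon on the unix socket named by SocketVMnetPath in the cluster config; nothing was listening there on this host. A minimal Go sketch (not part of the test suite) that probes the same socket the driver dials:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config logged above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// A "connection refused" here reproduces the failure in the log:
			// the socket_vmnet daemon is not running, or not on this path.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}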

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-003000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-003000 create -f testdata/busybox.yaml: exit status 1 (30.2675ms)

** stderr ** 
	error: context "old-k8s-version-003000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-003000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (28.849417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (28.98175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
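
kubectl fails before contacting any server: FirstStart never provisioned the VM, so no kubeconfig entry was written and the named context is missing. A sketch, assuming k8s.io/client-go is available, of checking for the context the same way kubectl resolves it:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Honors $KUBECONFIG (set to the integration kubeconfig above),
		// falling back to ~/.kube/config.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["old-k8s-version-003000"]; !ok {
			// Matches the kubectl error captured above.
			fmt.Println(`context "old-k8s-version-003000" does not exist`)
		}
	}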

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-003000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-003000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-003000 describe deploy/metrics-server -n kube-system: exit status 1 (27.915875ms)

** stderr ** 
	error: context "old-k8s-version-003000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-003000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (29.352125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
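
The expected string " fake.domain/registry.k8s.io/echoserver:1.4" is just the --registries override prefixed onto the --images override. A tiny sketch of that composition, inferred from the flags and the expected string rather than taken from minikube's source:

	package main

	import "fmt"

	func main() {
		image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=...
		registry := "fake.domain"                 // --registries=MetricsServer=...
		fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
	}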

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.181516833s)

-- stdout --
	* [old-k8s-version-003000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-003000" primary control-plane node in "old-k8s-version-003000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-003000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-003000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:12:23.666027    5932 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:12:23.666172    5932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:23.666175    5932 out.go:304] Setting ErrFile to fd 2...
	I0803 18:12:23.666177    5932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:23.666320    5932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:12:23.667568    5932 out.go:298] Setting JSON to false
	I0803 18:12:23.684380    5932 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4307,"bootTime":1722729636,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:12:23.684476    5932 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:12:23.689745    5932 out.go:177] * [old-k8s-version-003000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:12:23.696685    5932 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:12:23.696747    5932 notify.go:220] Checking for updates...
	I0803 18:12:23.704734    5932 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:12:23.707716    5932 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:12:23.710627    5932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:12:23.713737    5932 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:12:23.716603    5932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:12:23.719901    5932 config.go:182] Loaded profile config "old-k8s-version-003000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0803 18:12:23.723638    5932 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0803 18:12:23.726635    5932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:12:23.730663    5932 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 18:12:23.736617    5932 start.go:297] selected driver: qemu2
	I0803 18:12:23.736626    5932 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-003000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:12:23.736742    5932 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:12:23.739199    5932 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:12:23.739222    5932 cni.go:84] Creating CNI manager for ""
	I0803 18:12:23.739244    5932 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0803 18:12:23.739266    5932 start.go:340] cluster config:
	{Name:old-k8s-version-003000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-003000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:12:23.742856    5932 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:23.750689    5932 out.go:177] * Starting "old-k8s-version-003000" primary control-plane node in "old-k8s-version-003000" cluster
	I0803 18:12:23.754639    5932 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 18:12:23.754652    5932 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 18:12:23.754661    5932 cache.go:56] Caching tarball of preloaded images
	I0803 18:12:23.754716    5932 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:12:23.754723    5932 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0803 18:12:23.754772    5932 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/old-k8s-version-003000/config.json ...
	I0803 18:12:23.755130    5932 start.go:360] acquireMachinesLock for old-k8s-version-003000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:23.755157    5932 start.go:364] duration metric: took 20.792µs to acquireMachinesLock for "old-k8s-version-003000"
	I0803 18:12:23.755164    5932 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:12:23.755171    5932 fix.go:54] fixHost starting: 
	I0803 18:12:23.755284    5932 fix.go:112] recreateIfNeeded on old-k8s-version-003000: state=Stopped err=<nil>
	W0803 18:12:23.755292    5932 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:12:23.759526    5932 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-003000" ...
	I0803 18:12:23.767790    5932 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:23.767830    5932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c2:e6:94:1d:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0803 18:12:23.769811    5932 main.go:141] libmachine: STDOUT: 
	I0803 18:12:23.769828    5932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:23.769856    5932 fix.go:56] duration metric: took 14.685958ms for fixHost
	I0803 18:12:23.769859    5932 start.go:83] releasing machines lock for "old-k8s-version-003000", held for 14.699333ms
	W0803 18:12:23.769866    5932 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:12:23.769893    5932 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:23.769897    5932 start.go:729] Will try again in 5 seconds ...
	I0803 18:12:28.770497    5932 start.go:360] acquireMachinesLock for old-k8s-version-003000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:28.770786    5932 start.go:364] duration metric: took 219.125µs to acquireMachinesLock for "old-k8s-version-003000"
	I0803 18:12:28.770825    5932 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:12:28.770834    5932 fix.go:54] fixHost starting: 
	I0803 18:12:28.771196    5932 fix.go:112] recreateIfNeeded on old-k8s-version-003000: state=Stopped err=<nil>
	W0803 18:12:28.771210    5932 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:12:28.778452    5932 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-003000" ...
	I0803 18:12:28.781528    5932 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:28.781642    5932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c2:e6:94:1d:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/old-k8s-version-003000/disk.qcow2
	I0803 18:12:28.786808    5932 main.go:141] libmachine: STDOUT: 
	I0803 18:12:28.786859    5932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:28.786907    5932 fix.go:56] duration metric: took 16.07375ms for fixHost
	I0803 18:12:28.786920    5932 start.go:83] releasing machines lock for "old-k8s-version-003000", held for 16.118709ms
	W0803 18:12:28.787015    5932 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-003000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-003000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:28.794450    5932 out.go:177] 
	W0803 18:12:28.798526    5932 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:12:28.798541    5932 out.go:239] * 
	* 
	W0803 18:12:28.799729    5932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:12:28.810425    5932 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-003000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (54.202792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
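
Unlike FirstStart, SecondStart takes the existing-profile path (fixHost and recreateIfNeeded on state=Stopped) instead of creating a machine, but the restart dies on the same refused socket. The control flow visible in the log is one fixed-delay retry before surfacing GUEST_PROVISION; a minimal sketch of that pattern, with startHost standing in for the driver call:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		// Stand-in for the driver start that fails in the log.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			err = startHost()
		}
		if err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}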

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-003000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (31.650417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-003000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-003000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-003000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.26575ms)

** stderr ** 
	error: context "old-k8s-version-003000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-003000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (31.547833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-003000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (28.678875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
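
The -want +got diff reads as "every expected v1.20.0 image is missing", since image list returned nothing from the stopped VM. A sketch of the set difference the diff expresses (the helper name is illustrative, not the test's):

	package main

	import "fmt"

	// missing returns the entries of want that do not appear in got.
	func missing(want, got []string) []string {
		have := make(map[string]bool, len(got))
		for _, g := range got {
			have[g] = true
		}
		var out []string
		for _, w := range want {
			if !have[w] {
				out = append(out, w)
			}
		}
		return out
	}

	func main() {
		want := []string{
			"k8s.gcr.io/kube-apiserver:v1.20.0",
			"k8s.gcr.io/pause:3.2",
			// ...the rest of the expected list from the diff above
		}
		fmt.Println(missing(want, nil)) // all of want, matching the diff
	}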

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-003000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-003000 --alsologtostderr -v=1: exit status 83 (39.428166ms)

-- stdout --
	* The control-plane node old-k8s-version-003000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-003000"

-- /stdout --
** stderr ** 
	I0803 18:12:29.066348    5955 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:12:29.067359    5955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:29.067368    5955 out.go:304] Setting ErrFile to fd 2...
	I0803 18:12:29.067371    5955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:29.067520    5955 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:12:29.067739    5955 out.go:298] Setting JSON to false
	I0803 18:12:29.067745    5955 mustload.go:65] Loading cluster: old-k8s-version-003000
	I0803 18:12:29.067927    5955 config.go:182] Loaded profile config "old-k8s-version-003000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0803 18:12:29.072706    5955 out.go:177] * The control-plane node old-k8s-version-003000 host is not running: state=Stopped
	I0803 18:12:29.073808    5955 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-003000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-003000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (27.875375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (28.162583ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-003000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
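
Here pause exits 83 rather than 80: the profile loads, but mustload finds the control-plane host Stopped, so minikube prints the advice above and exits without attempting a pause. A sketch, using os/exec, of how a harness recovers that "exit status 83" (binary path and profile name taken from the log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "old-k8s-version-003000")
		if err := cmd.Run(); err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) {
				fmt.Println("non-zero exit:", ee.ExitCode()) // 83 in the run above
			}
		}
	}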

TestStartStop/group/embed-certs/serial/FirstStart (9.81s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-883000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-883000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.75489925s)

-- stdout --
	* [embed-certs-883000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-883000" primary control-plane node in "embed-certs-883000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-883000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I0803 18:12:29.377264    5972 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:12:29.377402    5972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:29.377405    5972 out.go:304] Setting ErrFile to fd 2...
	I0803 18:12:29.377408    5972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:29.377535    5972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:12:29.378629    5972 out.go:298] Setting JSON to false
	I0803 18:12:29.395228    5972 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4313,"bootTime":1722729636,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:12:29.395300    5972 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:12:29.398935    5972 out.go:177] * [embed-certs-883000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:12:29.405837    5972 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:12:29.405916    5972 notify.go:220] Checking for updates...
	I0803 18:12:29.411933    5972 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:12:29.414913    5972 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:12:29.417867    5972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:12:29.420816    5972 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:12:29.423814    5972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:12:29.427165    5972 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:12:29.427223    5972 config.go:182] Loaded profile config "stopped-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 18:12:29.427260    5972 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:12:29.431822    5972 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:12:29.438816    5972 start.go:297] selected driver: qemu2
	I0803 18:12:29.438821    5972 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:12:29.438826    5972 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:12:29.440909    5972 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:12:29.443836    5972 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:12:29.446865    5972 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:12:29.446883    5972 cni.go:84] Creating CNI manager for ""
	I0803 18:12:29.446889    5972 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:12:29.446892    5972 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 18:12:29.446923    5972 start.go:340] cluster config:
	{Name:embed-certs-883000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-883000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:12:29.450274    5972 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:29.453844    5972 out.go:177] * Starting "embed-certs-883000" primary control-plane node in "embed-certs-883000" cluster
	I0803 18:12:29.457714    5972 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:12:29.457728    5972 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:12:29.457741    5972 cache.go:56] Caching tarball of preloaded images
	I0803 18:12:29.457796    5972 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:12:29.457801    5972 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:12:29.457879    5972 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/embed-certs-883000/config.json ...
	I0803 18:12:29.457890    5972 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/embed-certs-883000/config.json: {Name:mk94a4eea2799ba0a1536f68f085122298fe69ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:12:29.458151    5972 start.go:360] acquireMachinesLock for embed-certs-883000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:29.458181    5972 start.go:364] duration metric: took 24.542µs to acquireMachinesLock for "embed-certs-883000"
	I0803 18:12:29.458190    5972 start.go:93] Provisioning new machine with config: &{Name:embed-certs-883000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.3 ClusterName:embed-certs-883000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:12:29.458215    5972 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:12:29.464698    5972 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:12:29.479783    5972 start.go:159] libmachine.API.Create for "embed-certs-883000" (driver="qemu2")
	I0803 18:12:29.479811    5972 client.go:168] LocalClient.Create starting
	I0803 18:12:29.479870    5972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:12:29.479904    5972 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:29.479912    5972 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:29.479948    5972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:12:29.479971    5972 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:29.479980    5972 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:29.480311    5972 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:12:29.633626    5972 main.go:141] libmachine: Creating SSH key...
	I0803 18:12:29.709350    5972 main.go:141] libmachine: Creating Disk image...
	I0803 18:12:29.709357    5972 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:12:29.709561    5972 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2
	I0803 18:12:29.719265    5972 main.go:141] libmachine: STDOUT: 
	I0803 18:12:29.719281    5972 main.go:141] libmachine: STDERR: 
	I0803 18:12:29.719332    5972 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2 +20000M
	I0803 18:12:29.727424    5972 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:12:29.727438    5972 main.go:141] libmachine: STDERR: 
	I0803 18:12:29.727450    5972 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2
	I0803 18:12:29.727456    5972 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:12:29.727471    5972 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:29.727500    5972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:d0:6b:16:f3:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2
	I0803 18:12:29.729208    5972 main.go:141] libmachine: STDOUT: 
	I0803 18:12:29.729223    5972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:29.729240    5972 client.go:171] duration metric: took 249.432375ms to LocalClient.Create
	I0803 18:12:31.731365    5972 start.go:128] duration metric: took 2.273182333s to createHost
	I0803 18:12:31.731406    5972 start.go:83] releasing machines lock for "embed-certs-883000", held for 2.273285625s
	W0803 18:12:31.731441    5972 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:31.748285    5972 out.go:177] * Deleting "embed-certs-883000" in qemu2 ...
	W0803 18:12:31.765066    5972 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:31.765079    5972 start.go:729] Will try again in 5 seconds ...
	I0803 18:12:36.767147    5972 start.go:360] acquireMachinesLock for embed-certs-883000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:36.767726    5972 start.go:364] duration metric: took 484.875µs to acquireMachinesLock for "embed-certs-883000"
	I0803 18:12:36.767884    5972 start.go:93] Provisioning new machine with config: &{Name:embed-certs-883000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-883000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:12:36.768191    5972 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:12:36.776774    5972 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:12:36.826418    5972 start.go:159] libmachine.API.Create for "embed-certs-883000" (driver="qemu2")
	I0803 18:12:36.826483    5972 client.go:168] LocalClient.Create starting
	I0803 18:12:36.826632    5972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:12:36.826697    5972 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:36.826717    5972 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:36.826786    5972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:12:36.826831    5972 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:36.826844    5972 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:36.827374    5972 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:12:36.993903    5972 main.go:141] libmachine: Creating SSH key...
	I0803 18:12:37.041613    5972 main.go:141] libmachine: Creating Disk image...
	I0803 18:12:37.041623    5972 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:12:37.041835    5972 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2
	I0803 18:12:37.051096    5972 main.go:141] libmachine: STDOUT: 
	I0803 18:12:37.051129    5972 main.go:141] libmachine: STDERR: 
	I0803 18:12:37.051175    5972 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2 +20000M
	I0803 18:12:37.059399    5972 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:12:37.059420    5972 main.go:141] libmachine: STDERR: 
	I0803 18:12:37.059432    5972 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2
	I0803 18:12:37.059438    5972 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:12:37.059448    5972 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:37.059476    5972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ef:b1:0e:c0:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2
	I0803 18:12:37.061146    5972 main.go:141] libmachine: STDOUT: 
	I0803 18:12:37.061167    5972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:37.061179    5972 client.go:171] duration metric: took 234.697042ms to LocalClient.Create
	I0803 18:12:39.062718    5972 start.go:128] duration metric: took 2.294568416s to createHost
	I0803 18:12:39.062752    5972 start.go:83] releasing machines lock for "embed-certs-883000", held for 2.295066083s
	W0803 18:12:39.062979    5972 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-883000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-883000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:39.072416    5972 out.go:177] 
	W0803 18:12:39.077445    5972 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:12:39.077461    5972 out.go:239] * 
	* 
	W0803 18:12:39.078963    5972 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:12:39.089305    5972 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-883000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000: exit status 7 (57.649542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-883000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.81s)
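
Diagnosis: every qemu2 start in this run fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the VM never launches and minikube exits with GUEST_PROVISION. A minimal diagnostic sketch for the CI host follows, assuming a Homebrew-managed socket_vmnet install; the launchd service label is an assumption and is not taken from the log.

	# Is the socket_vmnet daemon running, and does its unix socket exist?
	# (paths match SocketVMnetClientPath/SocketVMnetPath in the config dumps above)
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# socket_vmnet normally runs as a root launchd service; the grep pattern
	# here is a guess and may differ per install method.
	sudo launchctl list | grep -i socket_vmnet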

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-883000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-883000 create -f testdata/busybox.yaml: exit status 1 (28.838292ms)

** stderr ** 
	error: context "embed-certs-883000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-883000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000: exit status 7 (29.565625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-883000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000: exit status 7 (28.571375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-883000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
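
Note: the kubectl error above ("context ... does not exist") is a downstream symptom of the failed FirstStart, not an independent bug: since the VM was never created, minikube never wrote an "embed-certs-883000" context to the kubeconfig. A quick confirmation sketch, using the kubeconfig path from the log above:

	# The profile's context should be absent from the integration kubeconfig
	KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig kubectl config get-contexts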

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-883000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-883000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-883000 describe deploy/metrics-server -n kube-system: exit status 1 (27.157583ms)

** stderr ** 
	error: context "embed-certs-883000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-883000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000: exit status 7 (29.097375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-883000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
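
Note: the assertion at start_stop_delete_test.go:221 expects the metrics-server deployment to carry the rewritten image "fake.domain/registry.k8s.io/echoserver:1.4". On a cluster that had actually started, the image could be read directly rather than scraped from describe output; a sketch using standard kubectl jsonpath:

	# Print only the container image(s) of the metrics-server deployment
	kubectl --context embed-certs-883000 -n kube-system get deploy metrics-server \
		-o jsonpath='{.spec.template.spec.containers[*].image}'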

TestStartStop/group/no-preload/serial/FirstStart (10.74s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-214000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-214000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (10.673902709s)

-- stdout --
	* [no-preload-214000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-214000" primary control-plane node in "no-preload-214000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-214000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:12:43.030758    6036 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:12:43.030894    6036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:43.030898    6036 out.go:304] Setting ErrFile to fd 2...
	I0803 18:12:43.030901    6036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:43.031032    6036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:12:43.032275    6036 out.go:298] Setting JSON to false
	I0803 18:12:43.049898    6036 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4327,"bootTime":1722729636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:12:43.049963    6036 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:12:43.054365    6036 out.go:177] * [no-preload-214000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:12:43.061352    6036 notify.go:220] Checking for updates...
	I0803 18:12:43.067258    6036 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:12:43.074283    6036 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:12:43.080270    6036 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:12:43.086211    6036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:12:43.092233    6036 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:12:43.102104    6036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:12:43.105542    6036 config.go:182] Loaded profile config "embed-certs-883000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:12:43.105611    6036 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:12:43.105668    6036 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:12:43.112109    6036 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:12:43.124271    6036 start.go:297] selected driver: qemu2
	I0803 18:12:43.124276    6036 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:12:43.124283    6036 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:12:43.126945    6036 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:12:43.132238    6036 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:12:43.136422    6036 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:12:43.136468    6036 cni.go:84] Creating CNI manager for ""
	I0803 18:12:43.136476    6036 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:12:43.136483    6036 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 18:12:43.136538    6036 start.go:340] cluster config:
	{Name:no-preload-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:12:43.140179    6036 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:43.151297    6036 out.go:177] * Starting "no-preload-214000" primary control-plane node in "no-preload-214000" cluster
	I0803 18:12:43.158308    6036 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 18:12:43.158394    6036 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/no-preload-214000/config.json ...
	I0803 18:12:43.158415    6036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/no-preload-214000/config.json: {Name:mkdfe14ec6927860017bd0f5f3277182edc38dd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:12:43.158496    6036 cache.go:107] acquiring lock: {Name:mk454d502bb00fe9f5578b8ccf966bf1c1c667d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:43.158528    6036 cache.go:107] acquiring lock: {Name:mk34dca4c7d77ca76387dabd5770fb343b4e6856 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:43.158542    6036 cache.go:107] acquiring lock: {Name:mk8c49cdf0462d680a879c0e49b03aef8cb3564a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:43.158537    6036 cache.go:107] acquiring lock: {Name:mk2ed155e288d66442809ec056c78b33f2f08be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:43.158569    6036 cache.go:107] acquiring lock: {Name:mk2eac339b3624b0f233ae60b21bf297703b6ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:43.158594    6036 cache.go:115] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0803 18:12:43.158606    6036 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.75µs
	I0803 18:12:43.158630    6036 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0803 18:12:43.158593    6036 cache.go:107] acquiring lock: {Name:mk028aa6e5f3b289f6375fe482f01282f7945bcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:43.158662    6036 cache.go:107] acquiring lock: {Name:mk6040dbaeea26454c7414a508e6564e1cd107e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:43.158763    6036 cache.go:107] acquiring lock: {Name:mkea611a55fe4d417cbc2a53aebd674cb2cd474e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:43.162778    6036 start.go:360] acquireMachinesLock for no-preload-214000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:43.162832    6036 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0803 18:12:43.162832    6036 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0803 18:12:43.162845    6036 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0803 18:12:43.162869    6036 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0803 18:12:43.162851    6036 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0803 18:12:43.162881    6036 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0803 18:12:43.166172    6036 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0803 18:12:43.173760    6036 start.go:364] duration metric: took 10.968833ms to acquireMachinesLock for "no-preload-214000"
	I0803 18:12:43.173787    6036 start.go:93] Provisioning new machine with config: &{Name:no-preload-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:12:43.173836    6036 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:12:43.181790    6036 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0803 18:12:43.181823    6036 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0803 18:12:43.181793    6036 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0803 18:12:43.184315    6036 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:12:43.184484    6036 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0803 18:12:43.184493    6036 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0803 18:12:43.184698    6036 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0803 18:12:43.184687    6036 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0803 18:12:43.203690    6036 start.go:159] libmachine.API.Create for "no-preload-214000" (driver="qemu2")
	I0803 18:12:43.203731    6036 client.go:168] LocalClient.Create starting
	I0803 18:12:43.203810    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:12:43.203841    6036 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:43.203852    6036 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:43.203895    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:12:43.203920    6036 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:43.203932    6036 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:43.204355    6036 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:12:43.362984    6036 main.go:141] libmachine: Creating SSH key...
	I0803 18:12:43.450216    6036 main.go:141] libmachine: Creating Disk image...
	I0803 18:12:43.450282    6036 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:12:43.450466    6036 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2
	I0803 18:12:43.459791    6036 main.go:141] libmachine: STDOUT: 
	I0803 18:12:43.459822    6036 main.go:141] libmachine: STDERR: 
	I0803 18:12:43.459876    6036 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2 +20000M
	I0803 18:12:43.469161    6036 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:12:43.469185    6036 main.go:141] libmachine: STDERR: 
	I0803 18:12:43.469198    6036 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2
	I0803 18:12:43.469204    6036 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:12:43.469221    6036 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:43.469247    6036 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:06:84:4f:39:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2
	I0803 18:12:43.471048    6036 main.go:141] libmachine: STDOUT: 
	I0803 18:12:43.471063    6036 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:43.471082    6036 client.go:171] duration metric: took 267.354291ms to LocalClient.Create
	I0803 18:12:43.606276    6036 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0803 18:12:43.606290    6036 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0803 18:12:43.612888    6036 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0803 18:12:43.623033    6036 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0803 18:12:43.659637    6036 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0803 18:12:43.695779    6036 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0803 18:12:43.731383    6036 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0803 18:12:43.789034    6036 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0803 18:12:43.789097    6036 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 630.539417ms
	I0803 18:12:43.789129    6036 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0803 18:12:45.471262    6036 start.go:128] duration metric: took 2.297447458s to createHost
	I0803 18:12:45.471332    6036 start.go:83] releasing machines lock for "no-preload-214000", held for 2.29762575s
	W0803 18:12:45.471399    6036 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:45.483437    6036 out.go:177] * Deleting "no-preload-214000" in qemu2 ...
	W0803 18:12:45.512189    6036 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:45.512219    6036 start.go:729] Will try again in 5 seconds ...
	I0803 18:12:45.623701    6036 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0803 18:12:45.623750    6036 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.465230084s
	I0803 18:12:45.623798    6036 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0803 18:12:47.010286    6036 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0803 18:12:47.010339    6036 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 3.851812042s
	I0803 18:12:47.010363    6036 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0803 18:12:47.029638    6036 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0803 18:12:47.029700    6036 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 3.871312125s
	I0803 18:12:47.029722    6036 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0803 18:12:47.396482    6036 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0803 18:12:47.396533    6036 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 4.2381575s
	I0803 18:12:47.396564    6036 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0803 18:12:47.710152    6036 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0803 18:12:47.710215    6036 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 4.551689375s
	I0803 18:12:47.710244    6036 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0803 18:12:50.512217    6036 start.go:360] acquireMachinesLock for no-preload-214000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:51.278907    6036 start.go:364] duration metric: took 766.628375ms to acquireMachinesLock for "no-preload-214000"
	I0803 18:12:51.279071    6036 start.go:93] Provisioning new machine with config: &{Name:no-preload-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:12:51.279324    6036 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:12:51.292979    6036 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:12:51.343737    6036 start.go:159] libmachine.API.Create for "no-preload-214000" (driver="qemu2")
	I0803 18:12:51.343785    6036 client.go:168] LocalClient.Create starting
	I0803 18:12:51.343925    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:12:51.344009    6036 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:51.344029    6036 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:51.344099    6036 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:12:51.344144    6036 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:51.344162    6036 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:51.344641    6036 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:12:51.351835    6036 cache.go:157] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0803 18:12:51.351863    6036 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 8.193581584s
	I0803 18:12:51.351876    6036 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0803 18:12:51.351905    6036 cache.go:87] Successfully saved all images to host disk.
	I0803 18:12:51.511249    6036 main.go:141] libmachine: Creating SSH key...
	I0803 18:12:51.599059    6036 main.go:141] libmachine: Creating Disk image...
	I0803 18:12:51.599064    6036 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:12:51.599272    6036 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2
	I0803 18:12:51.608807    6036 main.go:141] libmachine: STDOUT: 
	I0803 18:12:51.608824    6036 main.go:141] libmachine: STDERR: 
	I0803 18:12:51.608876    6036 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2 +20000M
	I0803 18:12:51.616940    6036 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:12:51.616953    6036 main.go:141] libmachine: STDERR: 
	I0803 18:12:51.616969    6036 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2
	I0803 18:12:51.616978    6036 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:12:51.616990    6036 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:51.617033    6036 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:e8:43:13:35:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2
	I0803 18:12:51.618783    6036 main.go:141] libmachine: STDOUT: 
	I0803 18:12:51.618810    6036 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:51.618824    6036 client.go:171] duration metric: took 275.042333ms to LocalClient.Create
	I0803 18:12:53.620960    6036 start.go:128] duration metric: took 2.341674208s to createHost
	I0803 18:12:53.621072    6036 start.go:83] releasing machines lock for "no-preload-214000", held for 2.342197042s
	W0803 18:12:53.621444    6036 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-214000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-214000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:53.636962    6036 out.go:177] 
	W0803 18:12:53.646110    6036 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:12:53.646138    6036 out.go:239] * 
	* 
	W0803 18:12:53.648824    6036 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:12:53.658934    6036 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-214000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000: exit status 7 (64.25475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.74s)
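
Note: unlike VM creation, image caching for the no-preload profile did succeed: the cache.go:80 lines above, capped by "Successfully saved all images to host disk" (cache.go:87), show every required image tarball written before the run failed. To verify them on disk, using the cache path from the log:

	# List the cached image tarballs referenced in the log above
	ls /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/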

TestStartStop/group/embed-certs/serial/SecondStart (5.3s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-883000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-883000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.233660541s)

-- stdout --
	* [embed-certs-883000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-883000" primary control-plane node in "embed-certs-883000" cluster
	* Restarting existing qemu2 VM for "embed-certs-883000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-883000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:12:43.035214    6037 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:12:43.035331    6037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:43.035335    6037 out.go:304] Setting ErrFile to fd 2...
	I0803 18:12:43.035337    6037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:43.035489    6037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:12:43.036455    6037 out.go:298] Setting JSON to false
	I0803 18:12:43.052965    6037 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4327,"bootTime":1722729636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:12:43.053030    6037 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:12:43.061281    6037 out.go:177] * [embed-certs-883000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:12:43.067347    6037 notify.go:220] Checking for updates...
	I0803 18:12:43.071289    6037 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:12:43.077244    6037 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:12:43.083272    6037 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:12:43.089284    6037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:12:43.095220    6037 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:12:43.105213    6037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:12:43.108553    6037 config.go:182] Loaded profile config "embed-certs-883000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:12:43.108841    6037 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:12:43.124261    6037 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 18:12:43.128234    6037 start.go:297] selected driver: qemu2
	I0803 18:12:43.128240    6037 start.go:901] validating driver "qemu2" against &{Name:embed-certs-883000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:embed-certs-883000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:12:43.128300    6037 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:12:43.130547    6037 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:12:43.130570    6037 cni.go:84] Creating CNI manager for ""
	I0803 18:12:43.130578    6037 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:12:43.130611    6037 start.go:340] cluster config:
	{Name:embed-certs-883000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-883000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:12:43.134436    6037 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:43.144270    6037 out.go:177] * Starting "embed-certs-883000" primary control-plane node in "embed-certs-883000" cluster
	I0803 18:12:43.155288    6037 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:12:43.155311    6037 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:12:43.155332    6037 cache.go:56] Caching tarball of preloaded images
	I0803 18:12:43.155422    6037 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:12:43.155429    6037 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:12:43.155508    6037 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/embed-certs-883000/config.json ...
	I0803 18:12:43.155922    6037 start.go:360] acquireMachinesLock for embed-certs-883000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:43.155968    6037 start.go:364] duration metric: took 39µs to acquireMachinesLock for "embed-certs-883000"
	I0803 18:12:43.155978    6037 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:12:43.155985    6037 fix.go:54] fixHost starting: 
	I0803 18:12:43.156124    6037 fix.go:112] recreateIfNeeded on embed-certs-883000: state=Stopped err=<nil>
	W0803 18:12:43.156135    6037 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:12:43.166123    6037 out.go:177] * Restarting existing qemu2 VM for "embed-certs-883000" ...
	I0803 18:12:43.171289    6037 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:43.171346    6037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ef:b1:0e:c0:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2
	I0803 18:12:43.173684    6037 main.go:141] libmachine: STDOUT: 
	I0803 18:12:43.173706    6037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:43.173733    6037 fix.go:56] duration metric: took 17.749208ms for fixHost
	I0803 18:12:43.173740    6037 start.go:83] releasing machines lock for "embed-certs-883000", held for 17.766584ms
	W0803 18:12:43.173747    6037 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:12:43.173781    6037 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:43.173785    6037 start.go:729] Will try again in 5 seconds ...
	I0803 18:12:48.174063    6037 start.go:360] acquireMachinesLock for embed-certs-883000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:48.174635    6037 start.go:364] duration metric: took 414.75µs to acquireMachinesLock for "embed-certs-883000"
	I0803 18:12:48.174757    6037 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:12:48.174780    6037 fix.go:54] fixHost starting: 
	I0803 18:12:48.175502    6037 fix.go:112] recreateIfNeeded on embed-certs-883000: state=Stopped err=<nil>
	W0803 18:12:48.175531    6037 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:12:48.180980    6037 out.go:177] * Restarting existing qemu2 VM for "embed-certs-883000" ...
	I0803 18:12:48.192050    6037 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:48.192270    6037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ef:b1:0e:c0:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/embed-certs-883000/disk.qcow2
	I0803 18:12:48.202323    6037 main.go:141] libmachine: STDOUT: 
	I0803 18:12:48.202381    6037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:48.202467    6037 fix.go:56] duration metric: took 27.691ms for fixHost
	I0803 18:12:48.202489    6037 start.go:83] releasing machines lock for "embed-certs-883000", held for 27.832ms
	W0803 18:12:48.202684    6037 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-883000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-883000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:48.210987    6037 out.go:177] 
	W0803 18:12:48.214878    6037 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:12:48.214912    6037 out.go:239] * 
	* 
	W0803 18:12:48.217483    6037 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:12:48.231000    6037 out.go:177]

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-883000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000: exit status 7 (65.318875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-883000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.30s)
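
The "libmachine: executing:" lines above also show how the VM's network backend is wired up: socket_vmnet_client connects to /var/run/socket_vmnet and hands the connected socket to qemu as fd 3 (-netdev socket,id=net0,fd=3). Below is a hedged sketch of that fd-passing pattern; the paths and arguments are illustrative, not minikube's actual implementation.

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Connect to the daemon first; this is the step that fails above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf("dial socket_vmnet: %v", err) // "Connection refused" lands here
	}
	f, err := conn.(*net.UnixConn).File() // duplicate the socket as an *os.File
	if err != nil {
		log.Fatalf("fd from socket: %v", err)
	}
	// ExtraFiles[0] becomes fd 3 in the child (after stdin/stdout/stderr),
	// matching the -netdev socket,fd=3 backend in the qemu command line.
	cmd := exec.Command("qemu-system-aarch64",
		"-netdev", "socket,id=net0,fd=3",
		"-device", "virtio-net-pci,netdev=net0")
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Run(); err != nil {
		log.Fatalf("qemu: %v", err)
	}
}
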
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-883000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000: exit status 7 (32.042042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-883000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-883000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-883000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-883000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.933833ms)

** stderr **
	error: context "embed-certs-883000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-883000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000: exit status 7 (28.356791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-883000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
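
The recurring error `context "embed-certs-883000" does not exist` is a kubeconfig lookup failing before any API traffic: the first start never completed, so no context named after the profile was ever written. A minimal sketch of that resolution step, assuming k8s.io/client-go is available (the suite's own helpers may differ):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve a named context the same way "kubectl --context" does.
	rules := clientcmd.NewDefaultClientConfigLoadingRules() // honors $KUBECONFIG
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-883000"}
	cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
	if _, err := cfg.ClientConfig(); err != nil {
		// Prints, e.g.: context "embed-certs-883000" does not exist
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
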
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-883000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
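
The want/got listing above is a cmp-style diff: every expected image carries a "-" prefix because `image list` returned nothing from a VM that never booted. A sketch of how such a diff is produced, assuming github.com/google/go-cmp (the test's exact comparison helper may differ):

package main

import (
	"fmt"
	"sort"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // empty: "image list" had no running VM to query
	sort.Strings(want)
	sort.Strings(got)
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.3 images missing (-want +got):\n%s", diff)
	}
}
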
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000: exit status 7 (27.796833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-883000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-883000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-883000 --alsologtostderr -v=1: exit status 83 (41.643708ms)

-- stdout --
	* The control-plane node embed-certs-883000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-883000"

-- /stdout --
** stderr ** 
	I0803 18:12:48.487883    6096 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:12:48.488036    6096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:48.488039    6096 out.go:304] Setting ErrFile to fd 2...
	I0803 18:12:48.488041    6096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:48.488170    6096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:12:48.488394    6096 out.go:298] Setting JSON to false
	I0803 18:12:48.488400    6096 mustload.go:65] Loading cluster: embed-certs-883000
	I0803 18:12:48.488578    6096 config.go:182] Loaded profile config "embed-certs-883000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:12:48.491521    6096 out.go:177] * The control-plane node embed-certs-883000 host is not running: state=Stopped
	I0803 18:12:48.499594    6096 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-883000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-883000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000: exit status 7 (28.889666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-883000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000: exit status 7 (29.039291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-883000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
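
The post-mortem helpers throughout this report distinguish exit codes rather than treating any non-zero exit the same way: status returning 7 with "Stopped" on stdout is tolerated ("may be ok"), while pause returning 83 fails the test. A hedged sketch of that pattern, with the binary path copied from the logs:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "embed-certs-883000")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit code 7 with "Stopped" on stdout is the expected shape for a
		// dead VM, which is why the helper logs "(may be ok)".
		fmt.Printf("status %q, exit code %d\n", out, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("status %q, exit code 0\n", out)
}
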
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-432000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-432000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.92299575s)

-- stdout --
	* [default-k8s-diff-port-432000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-432000" primary control-plane node in "default-k8s-diff-port-432000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-432000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0803 18:12:48.791012    6113 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:12:48.791127    6113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:48.791131    6113 out.go:304] Setting ErrFile to fd 2...
	I0803 18:12:48.791133    6113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:48.791278    6113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:12:48.792304    6113 out.go:298] Setting JSON to false
	I0803 18:12:48.808354    6113 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4332,"bootTime":1722729636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:12:48.808425    6113 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:12:48.813637    6113 out.go:177] * [default-k8s-diff-port-432000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:12:48.822123    6113 notify.go:220] Checking for updates...
	I0803 18:12:48.825506    6113 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:12:48.829562    6113 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:12:48.835504    6113 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:12:48.843568    6113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:12:48.847544    6113 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:12:48.854568    6113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:12:48.858906    6113 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:12:48.858966    6113 config.go:182] Loaded profile config "no-preload-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0803 18:12:48.859021    6113 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:12:48.863543    6113 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:12:48.870526    6113 start.go:297] selected driver: qemu2
	I0803 18:12:48.870532    6113 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:12:48.870536    6113 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:12:48.872694    6113 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 18:12:48.876538    6113 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:12:48.880633    6113 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:12:48.880672    6113 cni.go:84] Creating CNI manager for ""
	I0803 18:12:48.880680    6113 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:12:48.880684    6113 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 18:12:48.880704    6113 start.go:340] cluster config:
	{Name:default-k8s-diff-port-432000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-432000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:12:48.884203    6113 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:48.892553    6113 out.go:177] * Starting "default-k8s-diff-port-432000" primary control-plane node in "default-k8s-diff-port-432000" cluster
	I0803 18:12:48.896631    6113 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:12:48.896646    6113 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:12:48.896658    6113 cache.go:56] Caching tarball of preloaded images
	I0803 18:12:48.896714    6113 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:12:48.896719    6113 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:12:48.896779    6113 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/default-k8s-diff-port-432000/config.json ...
	I0803 18:12:48.896792    6113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/default-k8s-diff-port-432000/config.json: {Name:mka3be4a958a2c246f442857b2890dab1124a6f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:12:48.897046    6113 start.go:360] acquireMachinesLock for default-k8s-diff-port-432000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:48.897081    6113 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "default-k8s-diff-port-432000"
	I0803 18:12:48.897091    6113 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-432000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-432000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:12:48.897133    6113 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:12:48.905547    6113 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:12:48.922002    6113 start.go:159] libmachine.API.Create for "default-k8s-diff-port-432000" (driver="qemu2")
	I0803 18:12:48.922022    6113 client.go:168] LocalClient.Create starting
	I0803 18:12:48.922081    6113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:12:48.922114    6113 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:48.922121    6113 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:48.922172    6113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:12:48.922198    6113 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:48.922204    6113 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:48.922543    6113 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:12:49.097141    6113 main.go:141] libmachine: Creating SSH key...
	I0803 18:12:49.256865    6113 main.go:141] libmachine: Creating Disk image...
	I0803 18:12:49.256872    6113 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:12:49.257071    6113 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2
	I0803 18:12:49.266705    6113 main.go:141] libmachine: STDOUT: 
	I0803 18:12:49.266723    6113 main.go:141] libmachine: STDERR: 
	I0803 18:12:49.266775    6113 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2 +20000M
	I0803 18:12:49.274808    6113 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:12:49.274827    6113 main.go:141] libmachine: STDERR: 
	I0803 18:12:49.274839    6113 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2
	I0803 18:12:49.274843    6113 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:12:49.274855    6113 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:49.274882    6113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:e8:21:46:cf:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2
	I0803 18:12:49.276589    6113 main.go:141] libmachine: STDOUT: 
	I0803 18:12:49.276605    6113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:49.276626    6113 client.go:171] duration metric: took 354.608459ms to LocalClient.Create
	I0803 18:12:51.278733    6113 start.go:128] duration metric: took 2.381650041s to createHost
	I0803 18:12:51.278797    6113 start.go:83] releasing machines lock for "default-k8s-diff-port-432000", held for 2.381773958s
	W0803 18:12:51.278864    6113 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:51.301945    6113 out.go:177] * Deleting "default-k8s-diff-port-432000" in qemu2 ...
	W0803 18:12:51.324353    6113 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:51.324378    6113 start.go:729] Will try again in 5 seconds ...
	I0803 18:12:56.326469    6113 start.go:360] acquireMachinesLock for default-k8s-diff-port-432000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:56.326819    6113 start.go:364] duration metric: took 264.333µs to acquireMachinesLock for "default-k8s-diff-port-432000"
	I0803 18:12:56.326942    6113 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-432000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-432000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:12:56.327220    6113 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:12:56.336546    6113 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:12:56.387012    6113 start.go:159] libmachine.API.Create for "default-k8s-diff-port-432000" (driver="qemu2")
	I0803 18:12:56.387063    6113 client.go:168] LocalClient.Create starting
	I0803 18:12:56.387158    6113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:12:56.387211    6113 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:56.387227    6113 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:56.387290    6113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:12:56.387320    6113 main.go:141] libmachine: Decoding PEM data...
	I0803 18:12:56.387335    6113 main.go:141] libmachine: Parsing certificate...
	I0803 18:12:56.387943    6113 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:12:56.554142    6113 main.go:141] libmachine: Creating SSH key...
	I0803 18:12:56.622020    6113 main.go:141] libmachine: Creating Disk image...
	I0803 18:12:56.622029    6113 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:12:56.622208    6113 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2
	I0803 18:12:56.631650    6113 main.go:141] libmachine: STDOUT: 
	I0803 18:12:56.631675    6113 main.go:141] libmachine: STDERR: 
	I0803 18:12:56.631722    6113 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2 +20000M
	I0803 18:12:56.639679    6113 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:12:56.639694    6113 main.go:141] libmachine: STDERR: 
	I0803 18:12:56.639703    6113 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2
	I0803 18:12:56.639708    6113 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:12:56.639722    6113 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:56.639750    6113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:40:78:80:f4:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2
	I0803 18:12:56.641322    6113 main.go:141] libmachine: STDOUT: 
	I0803 18:12:56.641338    6113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:56.641349    6113 client.go:171] duration metric: took 254.289542ms to LocalClient.Create
	I0803 18:12:58.643467    6113 start.go:128] duration metric: took 2.316290417s to createHost
	I0803 18:12:58.643573    6113 start.go:83] releasing machines lock for "default-k8s-diff-port-432000", held for 2.316764541s
	W0803 18:12:58.643978    6113 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-432000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-432000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:58.656562    6113 out.go:177] 
	W0803 18:12:58.660641    6113 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:12:58.660728    6113 out.go:239] * 
	* 
	W0803 18:12:58.663605    6113 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:12:58.671570    6113 out.go:177]

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-432000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000: exit status 7 (64.55925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-432000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.99s)
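
Note that in the block above the disk-image steps succeed (qemu-img convert from raw to qcow2, then resize by +20000M); the start only fails afterwards at the socket_vmnet connection. A sketch of those two qemu-img invocations, with shortened stand-in paths for the machine directory seen in the logs:

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts with its combined output on failure.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // stand-ins for the machine dir paths
	run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
	run("qemu-img", "resize", qcow2, "+20000M") // grow by 20000 MB, as in the logs
}
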
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-214000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-214000 create -f testdata/busybox.yaml: exit status 1 (29.56725ms)

** stderr **
	error: context "no-preload-214000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-214000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000: exit status 7 (28.142917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-214000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000: exit status 7 (28.809958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-214000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-214000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-214000 describe deploy/metrics-server -n kube-system: exit status 1 (27.197333ms)

** stderr **
	error: context "no-preload-214000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-214000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000: exit status 7 (28.615584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-214000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-214000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.178141459s)

-- stdout --
	* [no-preload-214000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-214000" primary control-plane node in "no-preload-214000" cluster
	* Restarting existing qemu2 VM for "no-preload-214000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-214000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:12:55.935348    6153 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:12:55.935493    6153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:55.935496    6153 out.go:304] Setting ErrFile to fd 2...
	I0803 18:12:55.935498    6153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:12:55.935623    6153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:12:55.936619    6153 out.go:298] Setting JSON to false
	I0803 18:12:55.952667    6153 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4339,"bootTime":1722729636,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:12:55.952736    6153 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:12:55.957944    6153 out.go:177] * [no-preload-214000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:12:55.963943    6153 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:12:55.963996    6153 notify.go:220] Checking for updates...
	I0803 18:12:55.971926    6153 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:12:55.975000    6153 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:12:55.977978    6153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:12:55.980950    6153 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:12:55.983889    6153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:12:55.987183    6153 config.go:182] Loaded profile config "no-preload-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0803 18:12:55.987465    6153 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:12:55.991959    6153 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 18:12:55.998940    6153 start.go:297] selected driver: qemu2
	I0803 18:12:55.998946    6153 start.go:901] validating driver "qemu2" against &{Name:no-preload-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:12:55.999013    6153 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:12:56.001299    6153 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:12:56.001320    6153 cni.go:84] Creating CNI manager for ""
	I0803 18:12:56.001326    6153 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:12:56.001346    6153 start.go:340] cluster config:
	{Name:no-preload-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:12:56.004823    6153 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:56.011905    6153 out.go:177] * Starting "no-preload-214000" primary control-plane node in "no-preload-214000" cluster
	I0803 18:12:56.015941    6153 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 18:12:56.016008    6153 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/no-preload-214000/config.json ...
	I0803 18:12:56.016049    6153 cache.go:107] acquiring lock: {Name:mk34dca4c7d77ca76387dabd5770fb343b4e6856 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:56.016077    6153 cache.go:107] acquiring lock: {Name:mk2eac339b3624b0f233ae60b21bf297703b6ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:56.016084    6153 cache.go:107] acquiring lock: {Name:mkea611a55fe4d417cbc2a53aebd674cb2cd474e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:56.016117    6153 cache.go:107] acquiring lock: {Name:mk8c49cdf0462d680a879c0e49b03aef8cb3564a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:56.016126    6153 cache.go:115] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0803 18:12:56.016136    6153 cache.go:115] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0803 18:12:56.016134    6153 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 90.375µs
	I0803 18:12:56.016153    6153 cache.go:107] acquiring lock: {Name:mk2ed155e288d66442809ec056c78b33f2f08be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:56.016164    6153 cache.go:107] acquiring lock: {Name:mk6040dbaeea26454c7414a508e6564e1cd107e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:56.016047    6153 cache.go:107] acquiring lock: {Name:mk454d502bb00fe9f5578b8ccf966bf1c1c667d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:56.016197    6153 cache.go:115] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0803 18:12:56.016201    6153 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 48.458µs
	I0803 18:12:56.016202    6153 cache.go:107] acquiring lock: {Name:mk028aa6e5f3b289f6375fe482f01282f7945bcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:12:56.016205    6153 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0803 18:12:56.016141    6153 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 65µs
	I0803 18:12:56.016208    6153 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0803 18:12:56.016157    6153 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0803 18:12:56.016219    6153 cache.go:115] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0803 18:12:56.016223    6153 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 107.083µs
	I0803 18:12:56.016225    6153 cache.go:115] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0803 18:12:56.016227    6153 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0803 18:12:56.016230    6153 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 186.083µs
	I0803 18:12:56.016235    6153 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0803 18:12:56.016265    6153 cache.go:115] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0803 18:12:56.016273    6153 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 87.75µs
	I0803 18:12:56.016279    6153 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0803 18:12:56.016293    6153 cache.go:115] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0803 18:12:56.016305    6153 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 255.333µs
	I0803 18:12:56.016309    6153 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0803 18:12:56.016295    6153 cache.go:115] /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0803 18:12:56.016313    6153 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 150.292µs
	I0803 18:12:56.016316    6153 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0803 18:12:56.016321    6153 cache.go:87] Successfully saved all images to host disk.
	I0803 18:12:56.016371    6153 start.go:360] acquireMachinesLock for no-preload-214000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:12:56.016412    6153 start.go:364] duration metric: took 35.042µs to acquireMachinesLock for "no-preload-214000"
	I0803 18:12:56.016421    6153 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:12:56.016425    6153 fix.go:54] fixHost starting: 
	I0803 18:12:56.016553    6153 fix.go:112] recreateIfNeeded on no-preload-214000: state=Stopped err=<nil>
	W0803 18:12:56.016561    6153 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:12:56.024928    6153 out.go:177] * Restarting existing qemu2 VM for "no-preload-214000" ...
	I0803 18:12:56.028920    6153 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:12:56.028955    6153 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:e8:43:13:35:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2
	I0803 18:12:56.031085    6153 main.go:141] libmachine: STDOUT: 
	I0803 18:12:56.031105    6153 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:12:56.031132    6153 fix.go:56] duration metric: took 14.707083ms for fixHost
	I0803 18:12:56.031136    6153 start.go:83] releasing machines lock for "no-preload-214000", held for 14.720041ms
	W0803 18:12:56.031144    6153 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:12:56.031174    6153 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:12:56.031178    6153 start.go:729] Will try again in 5 seconds ...
	I0803 18:13:01.033251    6153 start.go:360] acquireMachinesLock for no-preload-214000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:13:01.033660    6153 start.go:364] duration metric: took 301.25µs to acquireMachinesLock for "no-preload-214000"
	I0803 18:13:01.033778    6153 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:13:01.033800    6153 fix.go:54] fixHost starting: 
	I0803 18:13:01.034555    6153 fix.go:112] recreateIfNeeded on no-preload-214000: state=Stopped err=<nil>
	W0803 18:13:01.034601    6153 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:13:01.043656    6153 out.go:177] * Restarting existing qemu2 VM for "no-preload-214000" ...
	I0803 18:13:01.046582    6153 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:13:01.046896    6153 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:e8:43:13:35:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/no-preload-214000/disk.qcow2
	I0803 18:13:01.053360    6153 main.go:141] libmachine: STDOUT: 
	I0803 18:13:01.053425    6153 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:13:01.053486    6153 fix.go:56] duration metric: took 19.691542ms for fixHost
	I0803 18:13:01.053501    6153 start.go:83] releasing machines lock for "no-preload-214000", held for 19.82075ms
	W0803 18:13:01.053665    6153 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-214000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-214000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:13:01.058885    6153 out.go:177] 
	W0803 18:13:01.062685    6153 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:13:01.062720    6153 out.go:239] * 
	* 
	W0803 18:13:01.064054    6153 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:13:01.077589    6153 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-214000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000: exit status 7 (69.078125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
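Note: every qemu2 start in this run dies the same way. The driver shells out through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never receives its network file descriptor. A hedged triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as the /opt/socket_vmnet paths in the log suggest:

    # Does the socket exist, and is the daemon actually running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If the daemon is down, restarting its launchd service usually restores it
    # (service name assumed from the Homebrew socket_vmnet formula):
    sudo brew services restart socket_vmnet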

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-432000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-432000 create -f testdata/busybox.yaml: exit status 1 (29.794708ms)

** stderr ** 
	error: context "default-k8s-diff-port-432000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-432000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000: exit status 7 (28.907917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-432000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000: exit status 7 (28.296459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-432000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-432000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-432000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-432000 describe deploy/metrics-server -n kube-system: exit status 1 (26.917208ms)

** stderr ** 
	error: context "default-k8s-diff-port-432000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-432000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000: exit status 7 (28.775666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-432000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-214000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000: exit status 7 (31.888417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-214000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-214000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-214000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.561542ms)

** stderr ** 
	error: context "no-preload-214000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-214000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000: exit status 7 (29.013ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
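Note: this check only needs the container image of the dashboard-metrics-scraper deployment, which the test expects to contain "registry.k8s.io/echoserver:1.4". On a healthy cluster it could be read directly with a hypothetical invocation like:

    # Print the container image(s) of the deployment the test inspects.
    kubectl --context no-preload-214000 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'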

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-214000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000: exit status 7 (28.116417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
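Note: the "-want +got" block above is a go-cmp style diff: every expected v1.31.0-rc.0 image sits on a "-" line because "image list" has nothing to report for a VM that never started. Re-running the listing by hand (same invocation as the test) would reproduce the empty result:

    # Same command the test runs; on a stopped profile it lists no images.
    out/minikube-darwin-arm64 -p no-preload-214000 image list --format=json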

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-214000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-214000 --alsologtostderr -v=1: exit status 83 (40.291875ms)

-- stdout --
	* The control-plane node no-preload-214000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-214000"

-- /stdout --
** stderr ** 
	I0803 18:13:01.339158    6202 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:13:01.339317    6202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:01.339320    6202 out.go:304] Setting ErrFile to fd 2...
	I0803 18:13:01.339322    6202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:01.339459    6202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:13:01.339702    6202 out.go:298] Setting JSON to false
	I0803 18:13:01.339708    6202 mustload.go:65] Loading cluster: no-preload-214000
	I0803 18:13:01.339897    6202 config.go:182] Loaded profile config "no-preload-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0803 18:13:01.344475    6202 out.go:177] * The control-plane node no-preload-214000 host is not running: state=Stopped
	I0803 18:13:01.348318    6202 out.go:177]   To start a cluster, run: "minikube start -p no-preload-214000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-214000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000: exit status 7 (29.00125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-214000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000: exit status 7 (28.935875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-389000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-389000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.902438625s)

-- stdout --
	* [newest-cni-389000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-389000" primary control-plane node in "newest-cni-389000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-389000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:13:01.650692    6221 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:13:01.650809    6221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:01.650812    6221 out.go:304] Setting ErrFile to fd 2...
	I0803 18:13:01.650815    6221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:01.650937    6221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:13:01.652086    6221 out.go:298] Setting JSON to false
	I0803 18:13:01.668358    6221 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4345,"bootTime":1722729636,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:13:01.668446    6221 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:13:01.673546    6221 out.go:177] * [newest-cni-389000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:13:01.679501    6221 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:13:01.679541    6221 notify.go:220] Checking for updates...
	I0803 18:13:01.686442    6221 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:13:01.689431    6221 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:13:01.692460    6221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:13:01.695385    6221 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:13:01.698454    6221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:13:01.701815    6221 config.go:182] Loaded profile config "default-k8s-diff-port-432000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:13:01.701878    6221 config.go:182] Loaded profile config "multinode-483000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:13:01.701938    6221 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:13:01.705316    6221 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 18:13:01.712454    6221 start.go:297] selected driver: qemu2
	I0803 18:13:01.712461    6221 start.go:901] validating driver "qemu2" against <nil>
	I0803 18:13:01.712468    6221 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:13:01.714874    6221 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0803 18:13:01.714921    6221 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0803 18:13:01.721429    6221 out.go:177] * Automatically selected the socket_vmnet network
	I0803 18:13:01.724539    6221 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0803 18:13:01.724558    6221 cni.go:84] Creating CNI manager for ""
	I0803 18:13:01.724566    6221 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:13:01.724573    6221 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 18:13:01.724616    6221 start.go:340] cluster config:
	{Name:newest-cni-389000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:13:01.728441    6221 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:13:01.735422    6221 out.go:177] * Starting "newest-cni-389000" primary control-plane node in "newest-cni-389000" cluster
	I0803 18:13:01.739478    6221 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 18:13:01.739494    6221 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0803 18:13:01.739508    6221 cache.go:56] Caching tarball of preloaded images
	I0803 18:13:01.739583    6221 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:13:01.739589    6221 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0803 18:13:01.739650    6221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/newest-cni-389000/config.json ...
	I0803 18:13:01.739662    6221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/newest-cni-389000/config.json: {Name:mk73b8f3e1049786cb1a5f952410a713acbc9636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 18:13:01.740029    6221 start.go:360] acquireMachinesLock for newest-cni-389000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:13:01.740064    6221 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "newest-cni-389000"
	I0803 18:13:01.740075    6221 start.go:93] Provisioning new machine with config: &{Name:newest-cni-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:13:01.740109    6221 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:13:01.747430    6221 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:13:01.765518    6221 start.go:159] libmachine.API.Create for "newest-cni-389000" (driver="qemu2")
	I0803 18:13:01.765546    6221 client.go:168] LocalClient.Create starting
	I0803 18:13:01.765610    6221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:13:01.765643    6221 main.go:141] libmachine: Decoding PEM data...
	I0803 18:13:01.765651    6221 main.go:141] libmachine: Parsing certificate...
	I0803 18:13:01.765690    6221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:13:01.765714    6221 main.go:141] libmachine: Decoding PEM data...
	I0803 18:13:01.765724    6221 main.go:141] libmachine: Parsing certificate...
	I0803 18:13:01.766140    6221 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:13:01.919124    6221 main.go:141] libmachine: Creating SSH key...
	I0803 18:13:02.068022    6221 main.go:141] libmachine: Creating Disk image...
	I0803 18:13:02.068028    6221 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:13:02.068236    6221 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2
	I0803 18:13:02.078008    6221 main.go:141] libmachine: STDOUT: 
	I0803 18:13:02.078027    6221 main.go:141] libmachine: STDERR: 
	I0803 18:13:02.078074    6221 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2 +20000M
	I0803 18:13:02.086047    6221 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:13:02.086064    6221 main.go:141] libmachine: STDERR: 
	I0803 18:13:02.086079    6221 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2
	I0803 18:13:02.086085    6221 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:13:02.086096    6221 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:13:02.086129    6221 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:a8:66:a5:16:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2
	I0803 18:13:02.087905    6221 main.go:141] libmachine: STDOUT: 
	I0803 18:13:02.087921    6221 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:13:02.087942    6221 client.go:171] duration metric: took 322.400708ms to LocalClient.Create
	I0803 18:13:04.090126    6221 start.go:128] duration metric: took 2.350057875s to createHost
	I0803 18:13:04.090200    6221 start.go:83] releasing machines lock for "newest-cni-389000", held for 2.35019275s
	W0803 18:13:04.090303    6221 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:13:04.106611    6221 out.go:177] * Deleting "newest-cni-389000" in qemu2 ...
	W0803 18:13:04.138905    6221 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:13:04.138950    6221 start.go:729] Will try again in 5 seconds ...
	I0803 18:13:09.141018    6221 start.go:360] acquireMachinesLock for newest-cni-389000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:13:09.155655    6221 start.go:364] duration metric: took 14.544291ms to acquireMachinesLock for "newest-cni-389000"
	I0803 18:13:09.155735    6221 start.go:93] Provisioning new machine with config: &{Name:newest-cni-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 18:13:09.155963    6221 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 18:13:09.167946    6221 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 18:13:09.216362    6221 start.go:159] libmachine.API.Create for "newest-cni-389000" (driver="qemu2")
	I0803 18:13:09.216416    6221 client.go:168] LocalClient.Create starting
	I0803 18:13:09.216527    6221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/ca.pem
	I0803 18:13:09.216595    6221 main.go:141] libmachine: Decoding PEM data...
	I0803 18:13:09.216617    6221 main.go:141] libmachine: Parsing certificate...
	I0803 18:13:09.216681    6221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1166/.minikube/certs/cert.pem
	I0803 18:13:09.216724    6221 main.go:141] libmachine: Decoding PEM data...
	I0803 18:13:09.216738    6221 main.go:141] libmachine: Parsing certificate...
	I0803 18:13:09.217253    6221 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 18:13:09.406541    6221 main.go:141] libmachine: Creating SSH key...
	I0803 18:13:09.464048    6221 main.go:141] libmachine: Creating Disk image...
	I0803 18:13:09.464055    6221 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 18:13:09.464225    6221 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2
	I0803 18:13:09.473776    6221 main.go:141] libmachine: STDOUT: 
	I0803 18:13:09.473799    6221 main.go:141] libmachine: STDERR: 
	I0803 18:13:09.473853    6221 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2 +20000M
	I0803 18:13:09.482934    6221 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 18:13:09.482952    6221 main.go:141] libmachine: STDERR: 
	I0803 18:13:09.482972    6221 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2
	I0803 18:13:09.482977    6221 main.go:141] libmachine: Starting QEMU VM...
	I0803 18:13:09.482990    6221 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:13:09.483019    6221 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:5e:8e:56:c2:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2
	I0803 18:13:09.485055    6221 main.go:141] libmachine: STDOUT: 
	I0803 18:13:09.485070    6221 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:13:09.485082    6221 client.go:171] duration metric: took 268.669791ms to LocalClient.Create
	I0803 18:13:11.487351    6221 start.go:128] duration metric: took 2.331379292s to createHost
	I0803 18:13:11.487486    6221 start.go:83] releasing machines lock for "newest-cni-389000", held for 2.331853917s
	W0803 18:13:11.487832    6221 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-389000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-389000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:13:11.496362    6221 out.go:177] 
	W0803 18:13:11.500397    6221 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:13:11.500423    6221 out.go:239] * 
	* 
	W0803 18:13:11.503225    6221 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:13:11.512319    6221 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-389000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000: exit status 7 (66.139333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.97s)
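Every failure in this group dies at the same point: socket_vmnet_client reports Failed to connect to "/var/run/socket_vmnet": Connection refused before qemu-system-aarch64 is ever launched, so no VM boots. socket_vmnet_client dials that unix socket first and only then execs the wrapped qemu command with the connection passed as fd 3 (which is what -netdev socket,id=net0,fd=3 in the invocation above consumes); a refused dial therefore aborts the whole start. A minimal triage sketch for the CI host, assuming the Homebrew-managed socket_vmnet service that the /opt/socket_vmnet paths suggest (the brew services name and the no-op smoke test are assumptions, not taken from this log):

	# Is the socket present, and is a daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Restart the daemon (assumes a Homebrew-managed service; adjust for a manual install)
	sudo brew services restart socket_vmnet
	# Smoke test: wrap a no-op command the same way minikube wraps qemu
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true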

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-432000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-432000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.677064917s)

-- stdout --
	* [default-k8s-diff-port-432000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-432000" primary control-plane node in "default-k8s-diff-port-432000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-432000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-432000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:13:02.543962    6239 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:13:02.544086    6239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:02.544089    6239 out.go:304] Setting ErrFile to fd 2...
	I0803 18:13:02.544091    6239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:02.544217    6239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:13:02.545206    6239 out.go:298] Setting JSON to false
	I0803 18:13:02.561433    6239 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4346,"bootTime":1722729636,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:13:02.561505    6239 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:13:02.566371    6239 out.go:177] * [default-k8s-diff-port-432000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:13:02.573342    6239 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:13:02.573397    6239 notify.go:220] Checking for updates...
	I0803 18:13:02.580317    6239 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:13:02.583376    6239 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:13:02.586349    6239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:13:02.589241    6239 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:13:02.592304    6239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:13:02.595611    6239 config.go:182] Loaded profile config "default-k8s-diff-port-432000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:13:02.595874    6239 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:13:02.600336    6239 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 18:13:02.607330    6239 start.go:297] selected driver: qemu2
	I0803 18:13:02.607334    6239 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-432000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-432000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:13:02.607413    6239 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:13:02.609765    6239 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 18:13:02.609785    6239 cni.go:84] Creating CNI manager for ""
	I0803 18:13:02.609792    6239 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:13:02.609817    6239 start.go:340] cluster config:
	{Name:default-k8s-diff-port-432000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-432000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:13:02.613360    6239 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:13:02.621299    6239 out.go:177] * Starting "default-k8s-diff-port-432000" primary control-plane node in "default-k8s-diff-port-432000" cluster
	I0803 18:13:02.625261    6239 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 18:13:02.625273    6239 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 18:13:02.625282    6239 cache.go:56] Caching tarball of preloaded images
	I0803 18:13:02.625332    6239 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:13:02.625336    6239 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 18:13:02.625386    6239 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/default-k8s-diff-port-432000/config.json ...
	I0803 18:13:02.625832    6239 start.go:360] acquireMachinesLock for default-k8s-diff-port-432000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:13:04.090328    6239 start.go:364] duration metric: took 1.464516792s to acquireMachinesLock for "default-k8s-diff-port-432000"
	I0803 18:13:04.090491    6239 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:13:04.090545    6239 fix.go:54] fixHost starting: 
	I0803 18:13:04.091219    6239 fix.go:112] recreateIfNeeded on default-k8s-diff-port-432000: state=Stopped err=<nil>
	W0803 18:13:04.091269    6239 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:13:04.095661    6239 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-432000" ...
	I0803 18:13:04.109673    6239 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:13:04.109864    6239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:40:78:80:f4:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2
	I0803 18:13:04.120984    6239 main.go:141] libmachine: STDOUT: 
	I0803 18:13:04.121080    6239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:13:04.121201    6239 fix.go:56] duration metric: took 30.661292ms for fixHost
	I0803 18:13:04.121219    6239 start.go:83] releasing machines lock for "default-k8s-diff-port-432000", held for 30.854875ms
	W0803 18:13:04.121258    6239 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:13:04.121444    6239 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:13:04.121464    6239 start.go:729] Will try again in 5 seconds ...
	I0803 18:13:09.122082    6239 start.go:360] acquireMachinesLock for default-k8s-diff-port-432000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:13:09.122558    6239 start.go:364] duration metric: took 319.208µs to acquireMachinesLock for "default-k8s-diff-port-432000"
	I0803 18:13:09.122687    6239 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:13:09.122712    6239 fix.go:54] fixHost starting: 
	I0803 18:13:09.123427    6239 fix.go:112] recreateIfNeeded on default-k8s-diff-port-432000: state=Stopped err=<nil>
	W0803 18:13:09.123458    6239 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:13:09.140841    6239 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-432000" ...
	I0803 18:13:09.144814    6239 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:13:09.145030    6239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:40:78:80:f4:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/default-k8s-diff-port-432000/disk.qcow2
	I0803 18:13:09.155019    6239 main.go:141] libmachine: STDOUT: 
	I0803 18:13:09.155308    6239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:13:09.155430    6239 fix.go:56] duration metric: took 32.721917ms for fixHost
	I0803 18:13:09.155452    6239 start.go:83] releasing machines lock for "default-k8s-diff-port-432000", held for 32.863833ms
	W0803 18:13:09.155668    6239 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-432000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-432000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:13:09.167894    6239 out.go:177] 
	W0803 18:13:09.171866    6239 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:13:09.171913    6239 out.go:239] * 
	* 
	W0803 18:13:09.174448    6239 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:13:09.183740    6239 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-432000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000: exit status 7 (52.29075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-432000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.73s)
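Unlike the FirstStart failure above, this run goes through fixHost (restarting an existing machine) instead of LocalClient.Create, yet it stops at the identical socket dial, so the "minikube delete -p default-k8s-diff-port-432000 may fix it" hint can only help once the daemon is reachable again. A plausible recovery sequence, sketched under that assumption (the flags mirror the test invocation; --network=socket_vmnet is the qemu2 driver's flag for this network backend):

	# Only after socket_vmnet is confirmed up (see the triage sketch earlier):
	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-432000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-432000 --driver=qemu2 --network=socket_vmnet --memory=2200 --apiserver-port=8444 --kubernetes-version=v1.30.3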

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-432000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000: exit status 7 (35.233875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-432000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-432000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-432000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-432000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.107667ms)

** stderr ** 
	error: context "default-k8s-diff-port-432000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-432000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000: exit status 7 (31.953208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-432000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-432000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
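As the "(-want +got)" legend indicates, this is a go-cmp diff: "-" lines are images the test expected to find and "+" lines are what "image list" actually returned. Because the profile's VM never started, the command returns no images at all, so every expected v1.30.3 image lands on the want side and the got side is empty. Re-running the listed command by hand should confirm this (the empty output is inferred from the diff above, not a documented guarantee):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-432000 image list --format=json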
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000: exit status 7 (30.253209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-432000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-432000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-432000 --alsologtostderr -v=1: exit status 83 (42.20075ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-432000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-432000"

-- /stdout --
** stderr ** 
	I0803 18:13:09.454837    6260 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:13:09.455026    6260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:09.455029    6260 out.go:304] Setting ErrFile to fd 2...
	I0803 18:13:09.455032    6260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:09.455174    6260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:13:09.455383    6260 out.go:298] Setting JSON to false
	I0803 18:13:09.455388    6260 mustload.go:65] Loading cluster: default-k8s-diff-port-432000
	I0803 18:13:09.455563    6260 config.go:182] Loaded profile config "default-k8s-diff-port-432000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 18:13:09.459793    6260 out.go:177] * The control-plane node default-k8s-diff-port-432000 host is not running: state=Stopped
	I0803 18:13:09.463636    6260 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-432000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-432000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000: exit status 7 (28.600542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-432000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000: exit status 7 (27.930458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-432000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
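Three exit codes recur through these post-mortems: start exits 80 (GUEST_PROVISION) when provisioning fails, pause exits 83 when the profile exists but its host is stopped, and status exits 7 for a Stopped host, which helpers_test.go explicitly treats as "may be ok". A quick manual triage, with the codes as observed in this report rather than an authoritative table:

	out/minikube-darwin-arm64 pause -p default-k8s-diff-port-432000; echo "pause: $?"    # 83 in this run
	out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000; echo "status: $?"    # 7 in this run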

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-389000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-389000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.179341s)

-- stdout --
	* [newest-cni-389000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-389000" primary control-plane node in "newest-cni-389000" cluster
	* Restarting existing qemu2 VM for "newest-cni-389000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-389000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 18:13:14.960898    6308 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:13:14.961032    6308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:14.961036    6308 out.go:304] Setting ErrFile to fd 2...
	I0803 18:13:14.961038    6308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:14.961168    6308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:13:14.962186    6308 out.go:298] Setting JSON to false
	I0803 18:13:14.978224    6308 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4358,"bootTime":1722729636,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 18:13:14.978317    6308 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 18:13:14.982899    6308 out.go:177] * [newest-cni-389000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 18:13:14.989883    6308 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 18:13:14.989926    6308 notify.go:220] Checking for updates...
	I0803 18:13:14.995041    6308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 18:13:14.997839    6308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 18:13:15.000870    6308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 18:13:15.003857    6308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 18:13:15.006853    6308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 18:13:15.010182    6308 config.go:182] Loaded profile config "newest-cni-389000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0803 18:13:15.010448    6308 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 18:13:15.014829    6308 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 18:13:15.021842    6308 start.go:297] selected driver: qemu2
	I0803 18:13:15.021849    6308 start.go:901] validating driver "qemu2" against &{Name:newest-cni-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:13:15.021920    6308 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 18:13:15.024145    6308 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0803 18:13:15.024167    6308 cni.go:84] Creating CNI manager for ""
	I0803 18:13:15.024174    6308 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 18:13:15.024193    6308 start.go:340] cluster config:
	{Name:newest-cni-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 18:13:15.027537    6308 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 18:13:15.034806    6308 out.go:177] * Starting "newest-cni-389000" primary control-plane node in "newest-cni-389000" cluster
	I0803 18:13:15.038859    6308 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 18:13:15.038873    6308 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0803 18:13:15.038885    6308 cache.go:56] Caching tarball of preloaded images
	I0803 18:13:15.038944    6308 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 18:13:15.038949    6308 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0803 18:13:15.039002    6308 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/newest-cni-389000/config.json ...
	I0803 18:13:15.039465    6308 start.go:360] acquireMachinesLock for newest-cni-389000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:13:15.039504    6308 start.go:364] duration metric: took 33.209µs to acquireMachinesLock for "newest-cni-389000"
	I0803 18:13:15.039513    6308 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:13:15.039517    6308 fix.go:54] fixHost starting: 
	I0803 18:13:15.039641    6308 fix.go:112] recreateIfNeeded on newest-cni-389000: state=Stopped err=<nil>
	W0803 18:13:15.039649    6308 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:13:15.043838    6308 out.go:177] * Restarting existing qemu2 VM for "newest-cni-389000" ...
	I0803 18:13:15.051825    6308 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:13:15.051868    6308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:5e:8e:56:c2:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2
	I0803 18:13:15.053949    6308 main.go:141] libmachine: STDOUT: 
	I0803 18:13:15.053969    6308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:13:15.053998    6308 fix.go:56] duration metric: took 14.480625ms for fixHost
	I0803 18:13:15.054002    6308 start.go:83] releasing machines lock for "newest-cni-389000", held for 14.49425ms
	W0803 18:13:15.054010    6308 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:13:15.054042    6308 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:13:15.054047    6308 start.go:729] Will try again in 5 seconds ...
	I0803 18:13:20.056115    6308 start.go:360] acquireMachinesLock for newest-cni-389000: {Name:mk8118ba0191912f8ddd157ebc629e1eb9051406 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 18:13:20.056443    6308 start.go:364] duration metric: took 259.667µs to acquireMachinesLock for "newest-cni-389000"
	I0803 18:13:20.056563    6308 start.go:96] Skipping create...Using existing machine configuration
	I0803 18:13:20.056586    6308 fix.go:54] fixHost starting: 
	I0803 18:13:20.057305    6308 fix.go:112] recreateIfNeeded on newest-cni-389000: state=Stopped err=<nil>
	W0803 18:13:20.057336    6308 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 18:13:20.062772    6308 out.go:177] * Restarting existing qemu2 VM for "newest-cni-389000" ...
	I0803 18:13:20.066784    6308 qemu.go:418] Using hvf for hardware acceleration
	I0803 18:13:20.066999    6308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:5e:8e:56:c2:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1166/.minikube/machines/newest-cni-389000/disk.qcow2
	I0803 18:13:20.075815    6308 main.go:141] libmachine: STDOUT: 
	I0803 18:13:20.075876    6308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 18:13:20.075948    6308 fix.go:56] duration metric: took 19.368167ms for fixHost
	I0803 18:13:20.075963    6308 start.go:83] releasing machines lock for "newest-cni-389000", held for 19.496959ms
	W0803 18:13:20.076154    6308 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-389000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 18:13:20.083718    6308 out.go:177] 
	W0803 18:13:20.087807    6308 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 18:13:20.087829    6308 out.go:239] * 
	W0803 18:13:20.090530    6308 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 18:13:20.099681    6308 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-389000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000: exit status 7 (67.732417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
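
Note on the failure above: both start attempts die at the same point because nothing is accepting connections on /var/run/socket_vmnet, so the socket_vmnet_client wrapper exits before qemu-system-aarch64 ever boots the VM. A minimal Go sketch of a probe for that socket (a hypothetical diagnostic, not part of the minikube test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from the driver log above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" means the socket file exists but no
			// socket_vmnet daemon is listening on it; "no such file or
			// directory" means the daemon was never started at that path.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}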

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-389000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000: exit status 7 (30.199833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
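
Every expected image lands on the "-want" side of the diff above because the image list ran against a VM that never started, so the "got" list is empty. The "(-want +got)" layout matches what github.com/google/go-cmp prints; a minimal sketch under that assumption (the comparer the test actually uses is not shown in this log):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Two entries from the expected list above; got is empty because
		// the stopped VM reported no images.
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
			"registry.k8s.io/pause:3.9",
		}
		var got []string
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}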

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-389000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-389000 --alsologtostderr -v=1: exit status 83 (40.4645ms)

-- stdout --
	* The control-plane node newest-cni-389000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-389000"

-- /stdout --
** stderr ** 
	I0803 18:13:20.281557    6322 out.go:291] Setting OutFile to fd 1 ...
	I0803 18:13:20.281719    6322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:20.281722    6322 out.go:304] Setting ErrFile to fd 2...
	I0803 18:13:20.281725    6322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 18:13:20.281861    6322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 18:13:20.282083    6322 out.go:298] Setting JSON to false
	I0803 18:13:20.282088    6322 mustload.go:65] Loading cluster: newest-cni-389000
	I0803 18:13:20.282277    6322 config.go:182] Loaded profile config "newest-cni-389000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0803 18:13:20.285309    6322 out.go:177] * The control-plane node newest-cni-389000 host is not running: state=Stopped
	I0803 18:13:20.289355    6322 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-389000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-389000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000: exit status 7 (29.102208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-389000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000: exit status 7 (29.570792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
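
Exit status 83 here accompanies the "host is not running" hint rather than a crash: pause declines to act on a stopped profile. The "(dbg) Non-zero exit" lines throughout this report follow the same pattern of running a command and pulling the status out of the returned error; a rough Go sketch (command and args copied from the log, the harness's real helper is not shown here):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"pause", "-p", "newest-cni-389000", "--alsologtostderr", "-v=1")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Mirrors the "(dbg) Non-zero exit ... exit status N" lines.
			fmt.Printf("Non-zero exit: exit status %d\n%s", exitErr.ExitCode(), out)
		}
	}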


Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 12.98
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-rc.0/json-events 13.7
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.07
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.3
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 208.49
38 TestAddons/serial/Volcano 38.93
40 TestAddons/serial/GCPAuth/Namespaces 0.08
42 TestAddons/parallel/Registry 14.06
43 TestAddons/parallel/Ingress 18.75
44 TestAddons/parallel/InspektorGadget 10.22
45 TestAddons/parallel/MetricsServer 5.26
48 TestAddons/parallel/CSI 32.57
49 TestAddons/parallel/Headlamp 10.41
50 TestAddons/parallel/CloudSpanner 6.17
51 TestAddons/parallel/LocalPath 40.77
52 TestAddons/parallel/NvidiaDevicePlugin 5.16
53 TestAddons/parallel/Yakd 10.2
54 TestAddons/StoppedEnableDisable 12.4
62 TestHyperKitDriverInstallOrUpdate 10.32
65 TestErrorSpam/setup 34.64
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.24
68 TestErrorSpam/pause 0.63
69 TestErrorSpam/unpause 0.57
70 TestErrorSpam/stop 64.26
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 57.16
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 56.22
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.8
82 TestFunctional/serial/CacheCmd/cache/add_local 1.12
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.63
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
90 TestFunctional/serial/ExtraConfig 59.78
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.66
93 TestFunctional/serial/LogsFileCmd 0.66
94 TestFunctional/serial/InvalidService 3.6
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 9.64
98 TestFunctional/parallel/DryRun 0.27
99 TestFunctional/parallel/InternationalLanguage 0.12
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 25.46
108 TestFunctional/parallel/SSHCmd 0.12
109 TestFunctional/parallel/CpCmd 0.39
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.38
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.14
120 TestFunctional/parallel/License 0.27
121 TestFunctional/parallel/Version/short 0.03
122 TestFunctional/parallel/Version/components 0.19
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.72
128 TestFunctional/parallel/ImageCommands/Setup 1.76
129 TestFunctional/parallel/DockerEnv/bash 0.32
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
133 TestFunctional/parallel/ServiceCmd/DeployApp 12.09
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.45
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.21
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.1
146 TestFunctional/parallel/ServiceCmd/List 0.08
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
149 TestFunctional/parallel/ServiceCmd/Format 0.09
150 TestFunctional/parallel/ServiceCmd/URL 0.09
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
158 TestFunctional/parallel/ProfileCmd/profile_list 0.12
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
160 TestFunctional/parallel/MountCmd/any-port 5.27
161 TestFunctional/parallel/MountCmd/specific-port 1.07
162 TestFunctional/parallel/MountCmd/VerifyCleanup 0.76
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 208.15
170 TestMultiControlPlane/serial/DeployApp 4.5
171 TestMultiControlPlane/serial/PingHostFromPods 0.75
172 TestMultiControlPlane/serial/AddWorkerNode 52.08
173 TestMultiControlPlane/serial/NodeLabels 0.18
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
175 TestMultiControlPlane/serial/CopyFile 4.38
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 79.26
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 2.02
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.2
221 TestMainNoArgs 0.03
268 TestStoppedBinaryUpgrade/Setup 0.94
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
285 TestNoKubernetes/serial/ProfileList 31.29
286 TestNoKubernetes/serial/Stop 2.06
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
298 TestStoppedBinaryUpgrade/MinikubeLogs 0.84
303 TestStartStop/group/old-k8s-version/serial/Stop 2.07
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
314 TestStartStop/group/embed-certs/serial/Stop 3.55
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
327 TestStartStop/group/no-preload/serial/Stop 1.85
328 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.44
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
347 TestStartStop/group/newest-cni/serial/Stop 3.15
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-977000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-977000: exit status 85 (93.418833ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-977000 | jenkins | v1.33.1 | 03 Aug 24 17:19 PDT |          |
	|         | -p download-only-977000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 17:19:54
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 17:19:54.943460    1675 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:19:54.943602    1675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:19:54.943606    1675 out.go:304] Setting ErrFile to fd 2...
	I0803 17:19:54.943608    1675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:19:54.943724    1675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	W0803 17:19:54.943807    1675 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19364-1166/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19364-1166/.minikube/config/config.json: no such file or directory
	I0803 17:19:54.945025    1675 out.go:298] Setting JSON to true
	I0803 17:19:54.963777    1675 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1158,"bootTime":1722729636,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:19:54.963882    1675 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:19:54.969580    1675 out.go:97] [download-only-977000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:19:54.969694    1675 notify.go:220] Checking for updates...
	W0803 17:19:54.969806    1675 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball: no such file or directory
	I0803 17:19:54.972496    1675 out.go:169] MINIKUBE_LOCATION=19364
	I0803 17:19:54.975558    1675 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:19:54.979593    1675 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:19:54.982624    1675 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:19:54.985578    1675 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	W0803 17:19:54.991478    1675 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 17:19:54.991723    1675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:19:54.997474    1675 out.go:97] Using the qemu2 driver based on user configuration
	I0803 17:19:54.997494    1675 start.go:297] selected driver: qemu2
	I0803 17:19:54.997498    1675 start.go:901] validating driver "qemu2" against <nil>
	I0803 17:19:54.997568    1675 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 17:19:55.001626    1675 out.go:169] Automatically selected the socket_vmnet network
	I0803 17:19:55.007307    1675 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0803 17:19:55.007426    1675 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 17:19:55.007442    1675 cni.go:84] Creating CNI manager for ""
	I0803 17:19:55.007458    1675 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0803 17:19:55.007509    1675 start.go:340] cluster config:
	{Name:download-only-977000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-977000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:19:55.013443    1675 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:19:55.016621    1675 out.go:97] Downloading VM boot image ...
	I0803 17:19:55.016634    1675 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0803 17:20:01.197960    1675 out.go:97] Starting "download-only-977000" primary control-plane node in "download-only-977000" cluster
	I0803 17:20:01.197983    1675 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 17:20:01.252941    1675 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 17:20:01.252949    1675 cache.go:56] Caching tarball of preloaded images
	I0803 17:20:01.253089    1675 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 17:20:01.257431    1675 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0803 17:20:01.257438    1675 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:01.332320    1675 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 17:20:07.880470    1675 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:07.880657    1675 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:08.576422    1675 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0803 17:20:08.576624    1675 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/download-only-977000/config.json ...
	I0803 17:20:08.576646    1675 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/download-only-977000/config.json: {Name:mk74275890f984c00a097c3b7fd89b40f4ead095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:20:08.576901    1675 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 17:20:08.577102    1675 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0803 17:20:09.024744    1675 out.go:169] 
	W0803 17:20:09.029714    1675 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0 0x108e85aa0] Decompressors:map[bz2:0x14000592b58 gz:0x14000592c50 tar:0x14000592b90 tar.bz2:0x14000592c10 tar.gz:0x14000592c20 tar.xz:0x14000592c30 tar.zst:0x14000592c40 tbz2:0x14000592c10 tgz:0x14000592c20 txz:0x14000592c30 tzst:0x14000592c40 xz:0x14000592c58 zip:0x14000592c60 zst:0x14000592cb0] Getters:map[file:0x14000a04770 http:0x14000840500 https:0x14000840550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0803 17:20:09.029740    1675 out_reason.go:110] 
	W0803 17:20:09.038705    1675 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 17:20:09.041713    1675 out.go:169] 
	
	
	* The control-plane node download-only-977000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-977000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
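
The 404 above is for the checksum sidecar, not the binary itself: the getter asks dl.k8s.io for kubectl.sha256 under darwin/arm64 at v1.20.0, which apparently was never published for that platform, so the download aborts with "invalid checksum" and logs exits with status 85, which the test accepts here. A short Go sketch that reproduces the response code (URL copied from the log; needs network access):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL copied from the failed getter in the log above.
		const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("status:", resp.Status) // the log above recorded a 404 here
	}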

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-977000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (12.98s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-309000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-309000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (12.980578042s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.98s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-309000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-309000: exit status 85 (78.945917ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-977000 | jenkins | v1.33.1 | 03 Aug 24 17:19 PDT |                     |
	|         | -p download-only-977000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Aug 24 17:20 PDT | 03 Aug 24 17:20 PDT |
	| delete  | -p download-only-977000        | download-only-977000 | jenkins | v1.33.1 | 03 Aug 24 17:20 PDT | 03 Aug 24 17:20 PDT |
	| start   | -o=json --download-only        | download-only-309000 | jenkins | v1.33.1 | 03 Aug 24 17:20 PDT |                     |
	|         | -p download-only-309000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 17:20:09
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 17:20:09.456717    1699 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:20:09.456850    1699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:20:09.456853    1699 out.go:304] Setting ErrFile to fd 2...
	I0803 17:20:09.456856    1699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:20:09.457000    1699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:20:09.458079    1699 out.go:298] Setting JSON to true
	I0803 17:20:09.474167    1699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1173,"bootTime":1722729636,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:20:09.474236    1699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:20:09.479477    1699 out.go:97] [download-only-309000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:20:09.479562    1699 notify.go:220] Checking for updates...
	I0803 17:20:09.483499    1699 out.go:169] MINIKUBE_LOCATION=19364
	I0803 17:20:09.486616    1699 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:20:09.490464    1699 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:20:09.497476    1699 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:20:09.505468    1699 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	W0803 17:20:09.513482    1699 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 17:20:09.513637    1699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:20:09.517407    1699 out.go:97] Using the qemu2 driver based on user configuration
	I0803 17:20:09.517415    1699 start.go:297] selected driver: qemu2
	I0803 17:20:09.517418    1699 start.go:901] validating driver "qemu2" against <nil>
	I0803 17:20:09.517457    1699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 17:20:09.521424    1699 out.go:169] Automatically selected the socket_vmnet network
	I0803 17:20:09.526704    1699 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0803 17:20:09.526802    1699 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 17:20:09.526858    1699 cni.go:84] Creating CNI manager for ""
	I0803 17:20:09.526866    1699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 17:20:09.526872    1699 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 17:20:09.526921    1699 start.go:340] cluster config:
	{Name:download-only-309000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:20:09.530459    1699 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:20:09.533460    1699 out.go:97] Starting "download-only-309000" primary control-plane node in "download-only-309000" cluster
	I0803 17:20:09.533467    1699 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:20:09.586156    1699 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:20:09.586187    1699 cache.go:56] Caching tarball of preloaded images
	I0803 17:20:09.586335    1699 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:20:09.589994    1699 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0803 17:20:09.590001    1699 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:09.668450    1699 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 17:20:15.219608    1699 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:15.219773    1699 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:15.762633    1699 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 17:20:15.762829    1699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/download-only-309000/config.json ...
	I0803 17:20:15.762844    1699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/download-only-309000/config.json: {Name:mk4847d97f567f259ff5310f7148ab367e45bb07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:20:15.763088    1699 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 17:20:15.763203    1699 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-309000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-309000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
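
The preload steps above follow a download-then-verify pattern: fetch the tarball whose URL carries a checksum=md5:... parameter, then hash the file on disk before trusting the cache. A minimal Go sketch of that verification (path and digest copied from the v1.30.3 lines above; an illustration, not minikube's actual code):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		// Expected digest from the download URL's checksum=md5:... parameter.
		const want = "5a76dba1959f6b6fc5e29e1e172ab9ca"
		path := "/Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4"
		f, err := os.Open(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		got := hex.EncodeToString(h.Sum(nil))
		fmt.Println("checksum ok:", got == want)
	}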

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-309000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-rc.0/json-events (13.7s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-260000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-260000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 : (13.694847084s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (13.70s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-260000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-260000: exit status 85 (74.3025ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-977000 | jenkins | v1.33.1 | 03 Aug 24 17:19 PDT |                     |
	|         | -p download-only-977000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 03 Aug 24 17:20 PDT | 03 Aug 24 17:20 PDT |
	| delete  | -p download-only-977000           | download-only-977000 | jenkins | v1.33.1 | 03 Aug 24 17:20 PDT | 03 Aug 24 17:20 PDT |
	| start   | -o=json --download-only           | download-only-309000 | jenkins | v1.33.1 | 03 Aug 24 17:20 PDT |                     |
	|         | -p download-only-309000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 03 Aug 24 17:20 PDT | 03 Aug 24 17:20 PDT |
	| delete  | -p download-only-309000           | download-only-309000 | jenkins | v1.33.1 | 03 Aug 24 17:20 PDT | 03 Aug 24 17:20 PDT |
	| start   | -o=json --download-only           | download-only-260000 | jenkins | v1.33.1 | 03 Aug 24 17:20 PDT |                     |
	|         | -p download-only-260000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 17:20:22
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 17:20:22.722608    1721 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:20:22.722738    1721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:20:22.722741    1721 out.go:304] Setting ErrFile to fd 2...
	I0803 17:20:22.722743    1721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:20:22.722863    1721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:20:22.723907    1721 out.go:298] Setting JSON to true
	I0803 17:20:22.739943    1721 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1186,"bootTime":1722729636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:20:22.740006    1721 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:20:22.744231    1721 out.go:97] [download-only-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:20:22.744308    1721 notify.go:220] Checking for updates...
	I0803 17:20:22.748372    1721 out.go:169] MINIKUBE_LOCATION=19364
	I0803 17:20:22.752385    1721 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:20:22.755378    1721 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:20:22.758350    1721 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:20:22.761401    1721 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	W0803 17:20:22.767357    1721 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 17:20:22.767535    1721 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:20:22.770339    1721 out.go:97] Using the qemu2 driver based on user configuration
	I0803 17:20:22.770349    1721 start.go:297] selected driver: qemu2
	I0803 17:20:22.770353    1721 start.go:901] validating driver "qemu2" against <nil>
	I0803 17:20:22.770405    1721 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 17:20:22.773268    1721 out.go:169] Automatically selected the socket_vmnet network
	I0803 17:20:22.778314    1721 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0803 17:20:22.778421    1721 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 17:20:22.778443    1721 cni.go:84] Creating CNI manager for ""
	I0803 17:20:22.778451    1721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 17:20:22.778456    1721 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 17:20:22.778491    1721 start.go:340] cluster config:
	{Name:download-only-260000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:20:22.781633    1721 iso.go:125] acquiring lock: {Name:mk5fb439c48468a4ea1254631a9a137d32075713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 17:20:22.788637    1721 out.go:97] Starting "download-only-260000" primary control-plane node in "download-only-260000" cluster
	I0803 17:20:22.788643    1721 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 17:20:22.840487    1721 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0803 17:20:22.840499    1721 cache.go:56] Caching tarball of preloaded images
	I0803 17:20:22.840634    1721 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 17:20:22.845912    1721 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0803 17:20:22.845919    1721 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:22.924683    1721 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:c1f196b49f29ebea060b9249b6cb8e03 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0803 17:20:28.448067    1721 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:28.448237    1721 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 17:20:28.970116    1721 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0803 17:20:28.970324    1721 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/download-only-260000/config.json ...
	I0803 17:20:28.970339    1721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/download-only-260000/config.json: {Name:mk874def076512bb8c8ace2562cecac9ba211afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 17:20:28.970583    1721 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 17:20:28.970703    1721 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19364-1166/.minikube/cache/darwin/arm64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-260000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-260000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.07s)
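
For reference, the preload fetch logged above appends a ?checksum=md5:... query to the URL so the downloader verifies the tarball after writing it. A minimal by-hand equivalent (a sketch assuming only curl and the stock macOS md5 tool; the URL and digest are the ones in the log) is:

    curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
    md5 preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4  # expect c1f196b49f29ebea060b9249b6cb8e03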

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-260000
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.3s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-672000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-672000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-672000
--- PASS: TestBinaryMirror (0.30s)
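
A minimal reproduction sketch of what this test exercises, assuming the CI's locally built out/minikube-darwin-arm64 binary and some HTTP server already listening on the mirror port (127.0.0.1:49325 is the test's throwaway mirror endpoint):

    out/minikube-darwin-arm64 start --download-only -p binary-mirror-672000 \
      --binary-mirror http://127.0.0.1:49325 --driver=qemu2
    out/minikube-darwin-arm64 delete -p binary-mirror-672000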

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-989000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-989000: exit status 85 (56.415041ms)

-- stdout --
	* Profile "addons-989000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-989000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-989000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-989000: exit status 85 (52.775875ms)

-- stdout --
	* Profile "addons-989000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-989000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (208.49s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-989000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-989000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m28.493334584s)
--- PASS: TestAddons/Setup (208.49s)

TestAddons/serial/Volcano (38.93s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 8.96425ms
addons_test.go:913: volcano-controller stabilized in 9.003584ms
addons_test.go:905: volcano-admission stabilized in 9.0125ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-85fhb" [da101386-0112-4b60-807a-3304ee984ba6] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0039485s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-7bnls" [8b29cbfe-d98a-4bac-9dee-a78af8e57471] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003942542s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-zb9nw" [398bc410-f5ec-4e40-9159-2e43ca1ca47e] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003655709s
addons_test.go:932: (dbg) Run:  kubectl --context addons-989000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-989000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-989000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f5a9f5d3-3149-4947-a190-40d0d678322c] Pending
helpers_test.go:344: "test-job-nginx-0" [f5a9f5d3-3149-4947-a190-40d0d678322c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [f5a9f5d3-3149-4947-a190-40d0d678322c] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003828166s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-989000 addons disable volcano --alsologtostderr -v=1: (9.705163541s)
--- PASS: TestAddons/serial/Volcano (38.93s)
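
The Volcano check above boils down to the following sequence (a sketch reusing the commands visible in the log; testdata/vcjob.yaml is the harness's manifest and creates a job in the my-volcano namespace):

    kubectl --context addons-989000 delete -n volcano-system job volcano-admission-init
    kubectl --context addons-989000 create -f testdata/vcjob.yaml
    kubectl --context addons-989000 get vcjob -n my-volcano   # then wait for volcano.sh/job-name=test-job pods to run
    out/minikube-darwin-arm64 -p addons-989000 addons disable volcano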

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-989000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-989000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Registry (14.06s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.2385ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-pdc78" [8324b75f-c3f1-4d76-a420-8fe4294d5a90] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004080167s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gzz9k" [fa86709f-c127-4975-aaf8-318ce2455349] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004187417s
addons_test.go:342: (dbg) Run:  kubectl --context addons-989000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-989000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-989000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.775452291s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 ip
2024/08/03 17:25:16 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.06s)
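
The registry probe above is just an in-cluster HTTP HEAD request against the service's DNS name; a sketch of the same check, from the logged commands:

    kubectl --context addons-989000 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    out/minikube-darwin-arm64 -p addons-989000 ip   # the registry proxy answered on <node-ip>:5000 above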

TestAddons/parallel/Ingress (18.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-989000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-989000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-989000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [45846c5c-533c-4da1-b165-4956b33e2773] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [45846c5c-533c-4da1-b165-4956b33e2773] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003739791s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-989000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-989000 addons disable ingress --alsologtostderr -v=1: (7.203068875s)
--- PASS: TestAddons/parallel/Ingress (18.75s)
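
The two ingress assertions reduce to a Host-header curl from inside the VM and an nslookup against the ingress-dns resolver; a sketch from the logged commands:

    out/minikube-darwin-arm64 -p addons-989000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test $(out/minikube-darwin-arm64 -p addons-989000 ip)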

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vhrs9" [e112fb40-b3f1-4694-8fa9-5a5b5d46c299] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.002371708s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-989000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-989000: (5.2180655s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.26s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.616625ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-f4rpj" [90c31fa6-baaf-443e-97c7-9019d1337aa6] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003242125s
addons_test.go:417: (dbg) Run:  kubectl --context addons-989000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)

TestAddons/parallel/CSI (32.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.778208ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-989000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-989000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [dafbb2a7-d773-414a-813f-e96b4684e939] Pending
helpers_test.go:344: "task-pv-pod" [dafbb2a7-d773-414a-813f-e96b4684e939] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [dafbb2a7-d773-414a-813f-e96b4684e939] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003817792s
addons_test.go:590: (dbg) Run:  kubectl --context addons-989000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-989000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-989000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-989000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-989000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-989000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-989000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ad387850-97dd-4c27-ae2a-6ab0d3a27946] Pending
helpers_test.go:344: "task-pv-pod-restore" [ad387850-97dd-4c27-ae2a-6ab0d3a27946] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ad387850-97dd-4c27-ae2a-6ab0d3a27946] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003944125s
addons_test.go:632: (dbg) Run:  kubectl --context addons-989000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-989000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-989000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-989000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.097845s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (32.57s)
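
The CSI flow above is the standard claim -> pod -> snapshot -> restore round trip, driven entirely by the harness's testdata manifests (a sketch; every command takes --context addons-989000 as in the log):

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml             # claim "hpvc"
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml          # pod that binds it
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot "new-snapshot-demo"
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # claim restored from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod that binds the restored claim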

TestAddons/parallel/Headlamp (10.41s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-989000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-jxhqk" [dbb7f357-431d-4714-ad8e-09c207d8255d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-jxhqk" [dbb7f357-431d-4714-ad8e-09c207d8255d] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004273875s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (10.41s)

TestAddons/parallel/CloudSpanner (6.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-whj7s" [72b5b426-9a1b-486a-a59c-21f7eeb80686] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0043935s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-989000
--- PASS: TestAddons/parallel/CloudSpanner (6.17s)

TestAddons/parallel/LocalPath (40.77s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-989000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-989000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-989000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2111ecb5-4472-4912-b6de-95be18160e09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2111ecb5-4472-4912-b6de-95be18160e09] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2111ecb5-4472-4912-b6de-95be18160e09] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003899125s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-989000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 ssh "cat /opt/local-path-provisioner/pvc-0eb97910-18b7-4189-89e1-839fea274614_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-989000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-989000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-989000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.314137625s)
--- PASS: TestAddons/parallel/LocalPath (40.77s)
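
The local-path check writes through a PVC and then reads the file back from the provisioner's host directory; a sketch from the logged commands (the pvc-<uid> directory name embeds the claim's UID, which differs on every run):

    kubectl --context addons-989000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-989000 apply -f testdata/storage-provisioner-rancher/pod.yaml
    out/minikube-darwin-arm64 -p addons-989000 ssh "cat /opt/local-path-provisioner/pvc-<uid>_default_test-pvc/file1"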

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fdd94" [9a99b705-206e-4b5e-8936-96974eea5393] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004099125s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-989000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (10.2s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-272f4" [99c59bc7-36ad-4327-b80d-f64625f90714] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0035005s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-989000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-989000 addons disable yakd --alsologtostderr -v=1: (5.197837875s)
--- PASS: TestAddons/parallel/Yakd (10.20s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-989000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-989000: (12.206752166s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-989000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-989000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-989000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (10.32s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.32s)

TestErrorSpam/setup (34.64s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-380000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-380000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 --driver=qemu2 : (34.643934792s)
--- PASS: TestErrorSpam/setup (34.64s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 pause
--- PASS: TestErrorSpam/pause (0.63s)

TestErrorSpam/unpause (0.57s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 unpause
--- PASS: TestErrorSpam/unpause (0.57s)

TestErrorSpam/stop (64.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 stop: (12.201782542s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 stop: (26.027215958s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-380000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-380000 stop: (26.030727709s)
--- PASS: TestErrorSpam/stop (64.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19364-1166/.minikube/files/etc/test/nested/copy/1673/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (57.16s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-959000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0803 17:29:05.817581    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:05.824394    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:05.836497    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:05.858568    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:05.900677    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:05.982744    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:06.144803    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:06.466886    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:07.109018    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:08.391164    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:10.953295    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-959000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (57.154864166s)
--- PASS: TestFunctional/serial/StartWithProxy (57.16s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (56.22s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-959000 --alsologtostderr -v=8
E0803 17:29:16.075399    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:26.317013    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:29:46.798740    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-959000 --alsologtostderr -v=8: (56.219047875s)
functional_test.go:659: soft start took 56.219423708s for "functional-959000" cluster.
--- PASS: TestFunctional/serial/SoftStart (56.22s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-959000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-959000 cache add registry.k8s.io/pause:3.1: (1.148004709s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.80s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2093497937/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 cache add minikube-local-cache-test:functional-959000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 cache delete minikube-local-cache-test:functional-959000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-959000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-959000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (66.469ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.63s)
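
The reload check is: remove the image inside the node, confirm crictl no longer sees it, then let the cache repopulate it; a sketch from the logged commands:

    out/minikube-darwin-arm64 -p functional-959000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-959000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image gone
    out/minikube-darwin-arm64 -p functional-959000 cache reload
    out/minikube-darwin-arm64 -p functional-959000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # image restored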

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 kubectl -- --context functional-959000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-959000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

TestFunctional/serial/ExtraConfig (59.78s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-959000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0803 17:30:27.759868    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-959000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (59.780254625s)
functional_test.go:757: restart took 59.780354333s for "functional-959000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (59.78s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-959000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.66s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd4114374968/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.66s)

TestFunctional/serial/InvalidService (3.6s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-959000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-959000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-959000: exit status 115 (98.59525ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32567 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-959000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.60s)
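
The negative test simply points "minikube service" at a service whose pods can never run and expects the SVC_UNREACHABLE exit; a sketch from the logged commands:

    kubectl --context functional-959000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-arm64 service invalid-svc -p functional-959000   # expected: exit status 115
    kubectl --context functional-959000 delete -f testdata/invalidsvc.yaml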

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-959000 config get cpus: exit status 14 (31.667666ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-959000 config get cpus: exit status 14 (31.088459ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
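
Note: the unset-key probes above rely on `config get` exiting with status 14 ("specified key could not be found in config"). A hypothetical wrapper, assuming only what the log shows, that maps that status to a boolean instead of an error:

// config_get_sketch.go -- treat exit status 14 from `minikube config get` as "unset".
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func configGet(profile, key string) (value string, set bool, err error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "config", "get", key).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		return "", false, nil // key not present, as in the two probes above
	}
	if err != nil {
		return "", false, err
	}
	return strings.TrimSpace(string(out)), true, nil
}

func main() {
	fmt.Println(configGet("functional-959000", "cpus"))
}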

TestFunctional/parallel/DashboardCmd (9.64s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-959000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-959000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2825: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.64s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-959000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-959000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (145.948083ms)

-- stdout --
	* [functional-959000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0803 17:32:09.276363    2790 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:32:09.279048    2790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:32:09.279057    2790 out.go:304] Setting ErrFile to fd 2...
	I0803 17:32:09.279060    2790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:32:09.279205    2790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:32:09.280467    2790 out.go:298] Setting JSON to false
	I0803 17:32:09.299659    2790 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1893,"bootTime":1722729636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:32:09.299729    2790 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:32:09.306980    2790 out.go:177] * [functional-959000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 17:32:09.318066    2790 notify.go:220] Checking for updates...
	I0803 17:32:09.321961    2790 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:32:09.324982    2790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:32:09.327967    2790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:32:09.330995    2790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:32:09.338927    2790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:32:09.346892    2790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:32:09.351198    2790 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:32:09.351488    2790 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:32:09.358960    2790 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 17:32:09.369697    2790 start.go:297] selected driver: qemu2
	I0803 17:32:09.369715    2790 start.go:901] validating driver "qemu2" against &{Name:functional-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:32:09.369804    2790 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:32:09.377937    2790 out.go:177] 
	W0803 17:32:09.382033    2790 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0803 17:32:09.385858    2790 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-959000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
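
Note: with --dry-run, minikube validates the requested settings against the existing profile and exits without touching the VM; here the 250MB request trips the 1800MB floor and yields exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A paraphrase of that check -- the constant and wording come from the logged message, not from minikube's source:

// memory_floor_sketch.go -- illustrative only; mirrors the logged validation failure.
package main

import "fmt"

const minUsableMemoryMB = 1800 // "usable minimum of 1800MB" per the message above

func validateRequestedMemory(reqMB int) error {
	if reqMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB", reqMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250)) // fails, matching the dry run above
}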

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-959000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-959000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (121.745791ms)

-- stdout --
	* [functional-959000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0803 17:32:09.548776    2804 out.go:291] Setting OutFile to fd 1 ...
	I0803 17:32:09.548909    2804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:32:09.548912    2804 out.go:304] Setting ErrFile to fd 2...
	I0803 17:32:09.548915    2804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 17:32:09.549047    2804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
	I0803 17:32:09.550655    2804 out.go:298] Setting JSON to false
	I0803 17:32:09.571881    2804 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1893,"bootTime":1722729636,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 17:32:09.571982    2804 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 17:32:09.575968    2804 out.go:177] * [functional-959000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0803 17:32:09.583108    2804 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 17:32:09.583101    2804 notify.go:220] Checking for updates...
	I0803 17:32:09.589106    2804 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	I0803 17:32:09.591948    2804 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 17:32:09.594964    2804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 17:32:09.597833    2804 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	I0803 17:32:09.604803    2804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 17:32:09.608158    2804 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 17:32:09.608406    2804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 17:32:09.612979    2804 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0803 17:32:09.619955    2804 start.go:297] selected driver: qemu2
	I0803 17:32:09.619961    2804 start.go:901] validating driver "qemu2" against &{Name:functional-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 17:32:09.620017    2804 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 17:32:09.625974    2804 out.go:177] 
	W0803 17:32:09.629860    2804 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0803 17:32:09.632968    2804 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
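
Note: the output above is the same dry run localized to French ("Utilisation du pilote qemu2 basé sur le profil existant" = "Using the qemu2 driver based on existing profile"; the RSRC_INSUFFICIENT_REQ_MEMORY message likewise). The log does not show how the test selects the locale; assuming minikube reads the usual locale environment variables, a sketch of forcing it on the subprocess:

// locale_sketch.go -- run the same dry-run under a French locale (assumed mechanism).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-959000", "--dry-run", "--memory", "250MB", "--driver=qemu2")
	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput() // exit status 23 expected, as above
	fmt.Print(string(out))
}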

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
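
Note: the -f argument above is a Go text/template rendered over minikube's status object; Host, Kubelet, APIServer and Kubeconfig are its fields (the "kublet" label is just the test's chosen output key). A stand-in demonstration of the same template against a local struct, not minikube's actual type:

// status_template_sketch.go -- render the test's format string locally.
package main

import (
	"os"
	"text/template"
)

type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	tmpl.Execute(os.Stdout, status{"Running", "Running", "Running", "Configured"})
}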

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.46s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ce999f5e-bd8b-4112-b809-49fab632a548] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003796541s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-959000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-959000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-959000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-959000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1d7fe8c7-78dd-451d-a1e9-496fdc7aabf5] Pending
helpers_test.go:344: "sp-pod" [1d7fe8c7-78dd-451d-a1e9-496fdc7aabf5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1d7fe8c7-78dd-451d-a1e9-496fdc7aabf5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00446775s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-959000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-959000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-959000 delete -f testdata/storage-provisioner/pod.yaml: (1.040822791s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-959000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ea704d57-3768-4165-a9c6-d47eab3b4c6b] Pending
helpers_test.go:344: "sp-pod" [ea704d57-3768-4165-a9c6-d47eab3b4c6b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ea704d57-3768-4165-a9c6-d47eab3b4c6b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003835167s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-959000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.46s)
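
Note: the persistence check above is a fixed sequence -- claim a volume, write through one pod, delete it, recreate it, and read the same path. A condensed sketch of that sequence (readiness polling elided; the real test waits up to 3m0s for each pod):

// pvc_sequence_sketch.go -- replay the test's kubectl steps in order.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-959000"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "step failed:", err)
		os.Exit(1)
	}
}

func main() {
	run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // foo should survive the pod swap
}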

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh -n functional-959000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 cp functional-959000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4261603645/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh -n functional-959000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh -n functional-959000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.39s)

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1673/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "sudo cat /etc/test/nested/copy/1673/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1673.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "sudo cat /etc/ssl/certs/1673.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1673.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "sudo cat /usr/share/ca-certificates/1673.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16732.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "sudo cat /etc/ssl/certs/16732.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16732.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "sudo cat /usr/share/ca-certificates/16732.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.38s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-959000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-959000 ssh "sudo systemctl is-active crio": exit status 1 (139.323458ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/Version/short (0.03s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.03s)

TestFunctional/parallel/Version/components (0.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-959000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-959000
docker.io/kicbase/echo-server:functional-959000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-959000 image ls --format short --alsologtostderr:
I0803 17:32:10.060594    2821 out.go:291] Setting OutFile to fd 1 ...
I0803 17:32:10.060746    2821 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:32:10.060750    2821 out.go:304] Setting ErrFile to fd 2...
I0803 17:32:10.060752    2821 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:32:10.060885    2821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
I0803 17:32:10.061309    2821 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 17:32:10.061370    2821 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 17:32:10.062196    2821 ssh_runner.go:195] Run: systemctl --version
I0803 17:32:10.062205    2821 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/functional-959000/id_rsa Username:docker}
I0803 17:32:10.086180    2821 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-959000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kicbase/echo-server               | functional-959000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/localhost/my-image                | functional-959000 | 3edc632bc0dcb | 1.41MB |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/minikube-local-cache-test | functional-959000 | 8dd5f7d41479b | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-959000 image ls --format table --alsologtostderr:
I0803 17:32:12.011182    2835 out.go:291] Setting OutFile to fd 1 ...
I0803 17:32:12.011338    2835 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:32:12.011342    2835 out.go:304] Setting ErrFile to fd 2...
I0803 17:32:12.011344    2835 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:32:12.011468    2835 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
I0803 17:32:12.011905    2835 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 17:32:12.011971    2835 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 17:32:12.012798    2835 ssh_runner.go:195] Run: systemctl --version
I0803 17:32:12.012809    2835 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/functional-959000/id_rsa Username:docker}
I0803 17:32:12.038918    2835 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/08/03 17:32:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-959000 image ls --format json --alsologtostderr:
[{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4
fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"8dd5f7d41479b083dd5cc1da2b2b4a0b159b46add9d512b3d287b8e5220347ff","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-959000"],"size":"30"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-m
inikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"3edc632bc0dcb49f8e19c6c799fa77a2d3f5d67d5aa385a96a80d771eb469f2f","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-959000"],"size":"1410000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-959000"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-959000 image ls --format json --alsologtostderr:
I0803 17:32:11.932978    2833 out.go:291] Setting OutFile to fd 1 ...
I0803 17:32:11.933122    2833 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:32:11.933125    2833 out.go:304] Setting ErrFile to fd 2...
I0803 17:32:11.933127    2833 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:32:11.933267    2833 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
I0803 17:32:11.933696    2833 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 17:32:11.933791    2833 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 17:32:11.934593    2833 ssh_runner.go:195] Run: systemctl --version
I0803 17:32:11.934602    2833 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/functional-959000/id_rsa Username:docker}
I0803 17:32:11.958825    2833 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
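
Note: the json format above is an array of image records with id, repoDigests, repoTags and size (size is a string of bytes). A minimal consumer matching that shape verbatim:

// image_ls_json_sketch.go -- decode `minikube image ls --format json` output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-959000", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}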

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-959000 image ls --format yaml --alsologtostderr:
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-959000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8dd5f7d41479b083dd5cc1da2b2b4a0b159b46add9d512b3d287b8e5220347ff
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-959000
size: "30"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-959000 image ls --format yaml --alsologtostderr:
I0803 17:32:10.135767    2823 out.go:291] Setting OutFile to fd 1 ...
I0803 17:32:10.135934    2823 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:32:10.135943    2823 out.go:304] Setting ErrFile to fd 2...
I0803 17:32:10.135946    2823 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:32:10.136088    2823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
I0803 17:32:10.136512    2823 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 17:32:10.136578    2823 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 17:32:10.137408    2823 ssh_runner.go:195] Run: systemctl --version
I0803 17:32:10.137417    2823 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/functional-959000/id_rsa Username:docker}
I0803 17:32:10.161422    2823 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-959000 ssh pgrep buildkitd: exit status 1 (57.444167ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image build -t localhost/my-image:functional-959000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-959000 image build -t localhost/my-image:functional-959000 testdata/build --alsologtostderr: (1.597429s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-959000 image build -t localhost/my-image:functional-959000 testdata/build --alsologtostderr:
I0803 17:32:10.269866    2828 out.go:291] Setting OutFile to fd 1 ...
I0803 17:32:10.270100    2828 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:32:10.270105    2828 out.go:304] Setting ErrFile to fd 2...
I0803 17:32:10.270107    2828 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 17:32:10.270249    2828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1166/.minikube/bin
I0803 17:32:10.270675    2828 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 17:32:10.271484    2828 config.go:182] Loaded profile config "functional-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 17:32:10.272322    2828 ssh_runner.go:195] Run: systemctl --version
I0803 17:32:10.272330    2828 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1166/.minikube/machines/functional-959000/id_rsa Username:docker}
I0803 17:32:10.294925    2828 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2084687177.tar
I0803 17:32:10.294995    2828 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0803 17:32:10.301269    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2084687177.tar
I0803 17:32:10.302939    2828 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2084687177.tar: stat -c "%s %y" /var/lib/minikube/build/build.2084687177.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2084687177.tar': No such file or directory
I0803 17:32:10.302959    2828 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2084687177.tar --> /var/lib/minikube/build/build.2084687177.tar (3072 bytes)
I0803 17:32:10.320863    2828 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2084687177
I0803 17:32:10.324586    2828 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2084687177 -xf /var/lib/minikube/build/build.2084687177.tar
I0803 17:32:10.328283    2828 docker.go:360] Building image: /var/lib/minikube/build/build.2084687177
I0803 17:32:10.328334    2828 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-959000 /var/lib/minikube/build/build.2084687177
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:3edc632bc0dcb49f8e19c6c799fa77a2d3f5d67d5aa385a96a80d771eb469f2f done
#8 naming to localhost/my-image:functional-959000 done
#8 DONE 0.0s
I0803 17:32:11.789960    2828 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-959000 /var/lib/minikube/build/build.2084687177: (1.46165225s)
I0803 17:32:11.790023    2828 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2084687177
I0803 17:32:11.794595    2828 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2084687177.tar
I0803 17:32:11.798264    2828 build_images.go:217] Built localhost/my-image:functional-959000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2084687177.tar
I0803 17:32:11.798279    2828 build_images.go:133] succeeded building to: functional-959000
I0803 17:32:11.798284    2828 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.72s)

TestFunctional/parallel/ImageCommands/Setup (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.747917s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-959000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

TestFunctional/parallel/DockerEnv/bash (0.32s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-959000 docker-env) && out/minikube-darwin-arm64 status -p functional-959000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-959000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.32s)
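
Note: the /bin/bash -c wrapper above matters. "minikube docker-env" only prints shell export statements (DOCKER_HOST and related variables), so they have to be eval'd in the same shell that subsequently runs docker. A minimal sketch of the same check, assuming the binary path and profile from this run:

// Run "docker images" against the cluster's Docker daemon via docker-env.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// eval must happen in the same shell invocation that runs docker;
	// otherwise the exported variables are lost between processes.
	script := "eval $(out/minikube-darwin-arm64 -p functional-959000 docker-env) && docker images"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("%s\nerr=%v\n", out, err)
}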

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-959000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-959000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-bk9ck" [7c5e22a8-e526-4ff8-abc0-d6e3fa30d98c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-bk9ck" [7c5e22a8-e526-4ff8-abc0-d6e3fa30d98c] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003369042s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)
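
Note: the two kubectl calls above reduce to a short script. A sketch using plain kubectl against the functional-959000 context (the test shells out the same way); image name and port are taken from this run:

// Create the hello-node deployment and expose it as a NodePort service.
package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	args = append([]string{"--context", "functional-959000"}, args...)
	if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	// Deployment plus NodePort service, exactly as in the test above.
	kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver-arm:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
}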

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image load --daemon docker.io/kicbase/echo-server:functional-959000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image load --daemon docker.io/kicbase/echo-server:functional-959000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-959000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image load --daemon docker.io/kicbase/echo-server:functional-959000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image save docker.io/kicbase/echo-server:functional-959000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image rm docker.io/kicbase/echo-server:functional-959000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-959000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 image save --daemon docker.io/kicbase/echo-server:functional-959000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-959000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-959000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-959000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-959000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2646: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-959000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-959000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-959000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d84db70f-2e2f-4213-a089-44e0497ab657] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d84db70f-2e2f-4213-a089-44e0497ab657] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.003595958s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.10s)
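
Note: the 4m0s wait above is a polling loop in the test helpers. A simplified stand-in is sketched below; the run=nginx-svc label and context come from this run, while the 2-second poll interval is chosen arbitrarily:

// Poll until the pod matching run=nginx-svc reports the Running phase.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// The jsonpath expression yields the phase of every pod matching the label.
		out, _ := exec.Command("kubectl", "--context", "functional-959000",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("run=nginx-svc healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for run=nginx-svc")
}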

TestFunctional/parallel/ServiceCmd/List (0.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.08s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 service list -o json
functional_test.go:1490: Took "79.970917ms" to run "out/minikube-darwin-arm64 -p functional-959000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30384
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30384
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
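
Note: once "minikube service hello-node --url" has printed an endpoint, verifying it is a plain HTTP GET. A sketch with the URL hard-coded from this run; in practice it would be captured from the command's stdout:

// Hit the NodePort endpoint discovered by the ServiceCmd tests above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.105.4:30384") // NodePort endpoint from the log above
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Printf("%s", body) // echoserver reflects the request headers back
}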

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-959000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.105.61 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)
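
Note: the dig invocation above queries the cluster DNS service (10.96.0.10) directly, which works from the host because "minikube tunnel" routes the service CIDR. A Go-equivalent sketch pins a custom resolver to that server; the name and server address are taken from this run:

// Resolve an in-cluster service name against the cluster DNS server.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Send every query to the cluster DNS service instead of the system resolver.
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	fmt.Println(addrs, err)
}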

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-959000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "82.576791ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.720125ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "84.135833ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "34.528125ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.27s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4082139386/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722731522148578000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4082139386/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722731522148578000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4082139386/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722731522148578000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4082139386/001/test-1722731522148578000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-959000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.50975ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  4 00:32 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  4 00:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  4 00:32 test-1722731522148578000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh cat /mount-9p/test-1722731522148578000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-959000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3efdb5e6-cf76-40d3-9f49-8c47116da252] Pending
helpers_test.go:344: "busybox-mount" [3efdb5e6-cf76-40d3-9f49-8c47116da252] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3efdb5e6-cf76-40d3-9f49-8c47116da252] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3efdb5e6-cf76-40d3-9f49-8c47116da252] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00394075s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-959000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4082139386/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.27s)
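
Note the pattern at functional_test_mount_test.go:115 above: the first findmnt probe can run before the 9p mount is visible in the guest, so a non-zero exit is tolerated and the probe is simply rerun. A simplified retry sketch, with the binary path and profile from this run and the retry count chosen arbitrarily:

// Retry findmnt over "minikube ssh" until the 9p mount appears.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 5; i++ {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-959000",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never became visible")
}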

TestFunctional/parallel/MountCmd/specific-port (1.07s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1080122501/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-959000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (55.588958ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1080122501/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-959000 ssh "sudo umount -f /mount-9p": exit status 1 (57.488ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-959000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1080122501/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.07s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.76s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3207932887/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3207932887/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3207932887/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-959000 ssh "findmnt -T" /mount1: exit status 1 (82.035875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-959000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-959000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3207932887/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3207932887/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-959000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3207932887/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.76s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-959000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-959000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-959000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (208.15s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-960000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0803 17:34:05.808736    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
E0803 17:34:33.514687    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/addons-989000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-960000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m27.944742208s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (208.15s)

TestMultiControlPlane/serial/DeployApp (4.5s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-960000 -- rollout status deployment/busybox: (2.892448125s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-474gs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-4vcxd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-j8gzr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-474gs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-4vcxd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-j8gzr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-474gs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-4vcxd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-j8gzr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.50s)
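
Note: the nine exec calls above form a pods x names matrix; every busybox replica must resolve each of the three targets, which exercises CoreDNS from every node of the HA cluster. A sketch of the same matrix using plain kubectl with the ha-960000 context (the test goes through the minikube kubectl wrapper instead); the pod names are the ones from this run:

// Run nslookup for each DNS target inside every busybox pod.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-fc5497c4f-474gs", "busybox-fc5497c4f-4vcxd", "busybox-fc5497c4f-j8gzr"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			err := exec.Command("kubectl", "--context", "ha-960000",
				"exec", pod, "--", "nslookup", name).Run()
			fmt.Printf("%s -> %s: err=%v\n", pod, name, err)
		}
	}
}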

TestMultiControlPlane/serial/PingHostFromPods (0.75s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-474gs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-474gs -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-4vcxd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-4vcxd -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-j8gzr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-960000 -- exec busybox-fc5497c4f-j8gzr -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.75s)

TestMultiControlPlane/serial/AddWorkerNode (52.08s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-960000 -v=7 --alsologtostderr
E0803 17:36:23.638566    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:23.644906    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:23.656948    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:23.678744    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:23.720699    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:23.801804    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:23.963951    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:24.284678    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:24.926788    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:26.208649    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:28.768817    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:33.890851    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
E0803 17:36:44.132841    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-960000 -v=7 --alsologtostderr: (51.866187084s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.08s)

TestMultiControlPlane/serial/NodeLabels (0.18s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-960000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.18s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.38s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp testdata/cp-test.txt ha-960000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile206077144/001/cp-test_ha-960000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000:/home/docker/cp-test.txt ha-960000-m02:/home/docker/cp-test_ha-960000_ha-960000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m02 "sudo cat /home/docker/cp-test_ha-960000_ha-960000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000:/home/docker/cp-test.txt ha-960000-m03:/home/docker/cp-test_ha-960000_ha-960000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m03 "sudo cat /home/docker/cp-test_ha-960000_ha-960000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000:/home/docker/cp-test.txt ha-960000-m04:/home/docker/cp-test_ha-960000_ha-960000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m04 "sudo cat /home/docker/cp-test_ha-960000_ha-960000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp testdata/cp-test.txt ha-960000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile206077144/001/cp-test_ha-960000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m02:/home/docker/cp-test.txt ha-960000:/home/docker/cp-test_ha-960000-m02_ha-960000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000 "sudo cat /home/docker/cp-test_ha-960000-m02_ha-960000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m02:/home/docker/cp-test.txt ha-960000-m03:/home/docker/cp-test_ha-960000-m02_ha-960000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m03 "sudo cat /home/docker/cp-test_ha-960000-m02_ha-960000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m02:/home/docker/cp-test.txt ha-960000-m04:/home/docker/cp-test_ha-960000-m02_ha-960000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m04 "sudo cat /home/docker/cp-test_ha-960000-m02_ha-960000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp testdata/cp-test.txt ha-960000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile206077144/001/cp-test_ha-960000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m03:/home/docker/cp-test.txt ha-960000:/home/docker/cp-test_ha-960000-m03_ha-960000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000 "sudo cat /home/docker/cp-test_ha-960000-m03_ha-960000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m03:/home/docker/cp-test.txt ha-960000-m02:/home/docker/cp-test_ha-960000-m03_ha-960000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m02 "sudo cat /home/docker/cp-test_ha-960000-m03_ha-960000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m03:/home/docker/cp-test.txt ha-960000-m04:/home/docker/cp-test_ha-960000-m03_ha-960000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m04 "sudo cat /home/docker/cp-test_ha-960000-m03_ha-960000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp testdata/cp-test.txt ha-960000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile206077144/001/cp-test_ha-960000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m04:/home/docker/cp-test.txt ha-960000:/home/docker/cp-test_ha-960000-m04_ha-960000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000 "sudo cat /home/docker/cp-test_ha-960000-m04_ha-960000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m04:/home/docker/cp-test.txt ha-960000-m02:/home/docker/cp-test_ha-960000-m04_ha-960000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m02 "sudo cat /home/docker/cp-test_ha-960000-m04_ha-960000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 cp ha-960000-m04:/home/docker/cp-test.txt ha-960000-m03:/home/docker/cp-test_ha-960000-m04_ha-960000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-960000 ssh -n ha-960000-m03 "sudo cat /home/docker/cp-test_ha-960000-m04_ha-960000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.38s)
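
Note: the long cp/ssh sequence above is a full source x destination matrix over the four nodes: push the test file to each node, copy it from that node to every other node, then cat it over ssh to verify the transfer. Condensed into a sketch (error handling elided), with node names and paths from this run:

// Exercise "minikube cp" between every pair of nodes in the HA cluster.
package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) error {
	all := append([]string{"-p", "ha-960000"}, args...)
	return exec.Command("out/minikube-darwin-arm64", all...).Run()
}

func main() {
	nodes := []string{"ha-960000", "ha-960000-m02", "ha-960000-m03", "ha-960000-m04"}
	for _, src := range nodes {
		mk("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			remote := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			mk("cp", src+":/home/docker/cp-test.txt", dst+":"+remote)
			mk("ssh", "-n", dst, "sudo cat "+remote)
		}
	}
}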

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.26s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0803 17:46:23.609017    1673 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1166/.minikube/profiles/functional-959000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m19.264301917s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.26s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.02s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-626000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-626000 --output=json --user=testUser: (2.016051292s)
--- PASS: TestJSONOutput/stop/Command (2.02s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-377000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-377000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.844334ms)

-- stdout --
	{"specversion":"1.0","id":"26276333-9103-4a7f-828f-795978ef878d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-377000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0755c642-06be-4ba8-981c-bdcedc88624a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19364"}}
	{"specversion":"1.0","id":"790546fd-30d2-465d-970f-5ba08f624f30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig"}}
	{"specversion":"1.0","id":"70c650f2-6218-49a5-bd32-f41db7b579e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3ded52b9-0c12-4c3d-bf7a-502a600263f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5b2e521b-ccce-4057-bf33-86429291ad70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube"}}
	{"specversion":"1.0","id":"aecb8b0a-ec0c-46dd-9996-0dac00d38950","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eed100c9-f05c-4176-98c2-7df150791c7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-377000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-377000
--- PASS: TestErrorJSONOutput (0.20s)
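
The stdout block above shows the shape of minikube's --output=json stream: one CloudEvents-style JSON object per line, with a type field (io.k8s.sigs.minikube.step, .info, .error) and a string-valued data map. Below is a minimal Go sketch for consuming such a stream; it is an illustration, not part of the test suite, and the struct mirrors only the keys visible in this log.

// A minimal sketch for consuming the --output=json stream shown in the
// stdout block above. The struct mirrors only the CloudEvents keys
// visible in this log (specversion, id, source, type, data).
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not every line of minikube output is JSON
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			// Error events carry exitcode/name, e.g. DRV_UNSUPPORTED_OS above.
			fmt.Printf("[%s exit=%s] %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		default:
			fmt.Println(ev.Data["message"])
		}
	}
}

A hypothetical invocation would pipe minikube into the sketch, e.g.: out/minikube-darwin-arm64 start -p demo --output=json | go run parse_events.go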

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.94s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.94s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-562000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.690584ms)

-- stdout --
	* [NoKubernetes-562000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1166/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1166/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-562000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-562000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.537416ms)

-- stdout --
	* The control-plane node NoKubernetes-562000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-562000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.29s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.659571667s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.634676958s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.29s)

TestNoKubernetes/serial/Stop (2.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-562000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-562000: (2.060505583s)
--- PASS: TestNoKubernetes/serial/Stop (2.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-562000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-562000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.606709ms)

-- stdout --
	* The control-plane node NoKubernetes-562000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-562000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-413000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

TestStartStop/group/old-k8s-version/serial/Stop (2.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-003000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-003000 --alsologtostderr -v=3: (2.0675495s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-003000 -n old-k8s-version-003000: exit status 7 (49.431708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-003000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
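
The EnableAddonAfterStop steps in this report all follow the same pattern: query host state with a Go template (--format={{.Host}}), treat exit status 7 as acceptable when the output is "Stopped" (the log marks it "may be ok"), then enable the dashboard addon. A minimal sketch of that check follows; it is an illustration only, not the helper from start_stop_delete_test.go. The binary path, profile name, and the exit-status-7 convention are taken from the log lines above; everything else is an assumption.

// Sketch: accept a non-zero `minikube status` exit when the host is Stopped.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the log: render only the host field of `status`.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-003000")
	out, err := cmd.Output() // stdout is captured even on a non-zero exit

	host := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && host == "Stopped" {
		// Mirrors "status error: exit status 7 (may be ok)" in the log:
		// a stopped host is an expected state here, not a failure.
		fmt.Printf("host %s (exit %d), proceeding\n", host, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host:", host)
}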

TestStartStop/group/embed-certs/serial/Stop (3.55s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-883000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-883000 --alsologtostderr -v=3: (3.550480583s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.55s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-883000 -n embed-certs-883000: exit status 7 (29.618917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-883000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (1.85s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-214000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-214000 --alsologtostderr -v=3: (1.852362042s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.85s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-214000 -n no-preload-214000: exit status 7 (54.979917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-214000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-432000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-432000 --alsologtostderr -v=3: (3.443653958s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.44s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-432000 -n default-k8s-diff-port-432000: exit status 7 (57.24325ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-432000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-389000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-389000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-389000 --alsologtostderr -v=3: (3.148986917s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.15s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-389000 -n newest-cni-389000: exit status 7 (63.70825ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-389000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-289000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-289000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-289000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-289000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-289000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-289000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-289000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-289000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-289000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-289000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-289000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: /etc/hosts:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: /etc/resolv.conf:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-289000

>>> host: crictl pods:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: crictl containers:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> k8s: describe netcat deployment:
error: context "cilium-289000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-289000" does not exist

>>> k8s: netcat logs:
error: context "cilium-289000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-289000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-289000" does not exist

>>> k8s: coredns logs:
error: context "cilium-289000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-289000" does not exist

>>> k8s: api server logs:
error: context "cilium-289000" does not exist

>>> host: /etc/cni:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: ip a s:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: ip r s:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: iptables-save:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: iptables table nat:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-289000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-289000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-289000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-289000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-289000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-289000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-289000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-289000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-289000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-289000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-289000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: kubelet daemon config:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> k8s: kubelet logs:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-289000

>>> host: docker daemon status:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: docker daemon config:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: docker system info:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: cri-docker daemon status:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: cri-docker daemon config:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: cri-dockerd version:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: containerd daemon status:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: containerd daemon config:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: containerd config dump:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: crio daemon status:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: crio daemon config:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: /etc/crio:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

>>> host: crio config:
* Profile "cilium-289000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289000"

----------------------- debugLogs end: cilium-289000 [took: 2.174321125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-289000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-289000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-148000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-148000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)