Test Report: QEMU_macOS 19339

8887856610da967907ca11fca489a0af319d423c:2024-07-29:35555

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.58
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.08
36 TestAddons/Setup 10.61
37 TestCertOptions 10.29
38 TestCertExpiration 195.43
39 TestDockerFlags 10.36
40 TestForceSystemdFlag 10.2
41 TestForceSystemdEnv 10.15
47 TestErrorSpam/setup 9.81
56 TestFunctional/serial/StartWithProxy 9.84
58 TestFunctional/serial/SoftStart 5.27
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
70 TestFunctional/serial/MinikubeKubectlCmd 0.74
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.97
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.17
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.14
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.28
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
108 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 100.44
109 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
110 TestFunctional/parallel/ServiceCmd/List 0.04
111 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
112 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
113 TestFunctional/parallel/ServiceCmd/Format 0.04
114 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/Version/components 0.04
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
127 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.31
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
136 TestFunctional/parallel/DockerEnv/bash 0.04
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 37.45
150 TestMultiControlPlane/serial/StartCluster 9.88
151 TestMultiControlPlane/serial/DeployApp 115.04
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 48.46
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.36
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
164 TestMultiControlPlane/serial/StopCluster 3.93
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 10.01
174 TestJSONOutput/start/Command 9.91
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.16
206 TestMountStart/serial/StartWithMountFirst 10.06
209 TestMultiNode/serial/FreshStart2Nodes 9.87
210 TestMultiNode/serial/DeployApp2Nodes 96.57
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 47.5
218 TestMultiNode/serial/RestartKeepsNodes 7.2
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 4.12
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 20.22
226 TestPreload 10.05
228 TestScheduledStopUnix 10.01
229 TestSkaffold 12.09
232 TestRunningBinaryUpgrade 586.59
234 TestKubernetesUpgrade 18.17
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.37
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.27
250 TestStoppedBinaryUpgrade/Upgrade 575.95
252 TestPause/serial/Start 9.91
262 TestNoKubernetes/serial/StartWithK8s 10.02
263 TestNoKubernetes/serial/StartWithStopK8s 5.35
264 TestNoKubernetes/serial/Start 5.31
268 TestNoKubernetes/serial/StartNoArgs 5.32
270 TestNetworkPlugins/group/auto/Start 9.83
271 TestNetworkPlugins/group/kindnet/Start 9.94
272 TestNetworkPlugins/group/calico/Start 9.73
273 TestNetworkPlugins/group/custom-flannel/Start 9.84
274 TestNetworkPlugins/group/false/Start 9.77
275 TestNetworkPlugins/group/enable-default-cni/Start 9.7
276 TestNetworkPlugins/group/flannel/Start 9.92
277 TestNetworkPlugins/group/bridge/Start 9.9
278 TestNetworkPlugins/group/kubenet/Start 9.78
280 TestStartStop/group/old-k8s-version/serial/FirstStart 9.91
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.2
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 9.95
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
297 TestStartStop/group/no-preload/serial/SecondStart 5.24
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 9.92
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.95
306 TestStartStop/group/embed-certs/serial/DeployApp 0.09
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
310 TestStartStop/group/embed-certs/serial/SecondStart 5.81
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/embed-certs/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/FirstStart 9.88
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/SecondStart 5.25
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (15.58s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-403000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-403000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (15.577668584s)

-- stdout --
	{"specversion":"1.0","id":"af113277-08fe-4b28-b9b8-a1713b97da21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-403000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"20a31bb6-ff6c-4b02-a9de-c89851a5eec4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19339"}}
	{"specversion":"1.0","id":"74a2239c-7670-4f71-b4d4-427d6c6e17d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig"}}
	{"specversion":"1.0","id":"abcac549-3cc6-4f10-9a58-7d1a7552bdba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5a291f6e-2226-43df-a553-4b205cc60c1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"35e65416-8438-468f-bf47-2fdeb0be5ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube"}}
	{"specversion":"1.0","id":"6de6dba5-b8f5-4e60-bdda-8457f5be9dd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"0b0f724e-4abe-42b4-8510-22af46397cc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec891fc4-5169-4eda-afad-ad07e82a0f14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"b2d831f8-1430-438a-a039-fa3e79bc0d77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"86266752-6a48-4556-9f2c-f83307d19169","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-403000\" primary control-plane node in \"download-only-403000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"82b4a740-0f5e-4f59-94f3-d45d20a60ae5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"efc1be0c-efa8-49ad-873a-cb5f75015c9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109081a60 0x109081a60 0x109081a60 0x109081a60 0x109081a60 0x109081a60 0x109081a60] Decompressors:map[bz2:0x1400089b270 gz:0x1400089b278 tar:0x1400089b220 tar.bz2:0x1400089b230 tar.gz:0x1400089b240 tar.xz:0x1400089b250 tar.zst:0x1400089b260 tbz2:0x1400089b230 tgz:0x14
00089b240 txz:0x1400089b250 tzst:0x1400089b260 xz:0x1400089b280 zip:0x1400089b290 zst:0x1400089b288] Getters:map[file:0x14000985970 http:0x140006dc640 https:0x140006dc690] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"d3bf4970-43e9-4d64-9729-2f6a31919572","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
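
The stdout above is minikube's `-o=json` event stream: one CloudEvents JSON object per line. A minimal Go sketch for pulling the exit code out of the io.k8s.sigs.minikube.error event (the struct and the truncated sample line are illustrative; the field names come from the log):

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent models only the fields used below; minikube emits full CloudEvents.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Truncated sample of the error event from the stream above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"40","message":"Failed to cache kubectl: ..."}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("not a CloudEvent line:", err)
		return
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("error event: exitcode=%s message=%q\n", ev.Data["exitcode"], ev.Data["message"])
	}
}
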
** stderr ** 
	I0729 10:34:58.122367    6545 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:34:58.122491    6545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:34:58.122495    6545 out.go:304] Setting ErrFile to fd 2...
	I0729 10:34:58.122498    6545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:34:58.122612    6545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	W0729 10:34:58.122700    6545 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19339-6071/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19339-6071/.minikube/config/config.json: no such file or directory
	I0729 10:34:58.124059    6545 out.go:298] Setting JSON to true
	I0729 10:34:58.141785    6545 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3867,"bootTime":1722270631,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:34:58.141859    6545 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:34:58.147365    6545 out.go:97] [download-only-403000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:34:58.147490    6545 notify.go:220] Checking for updates...
	W0729 10:34:58.147508    6545 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 10:34:58.151265    6545 out.go:169] MINIKUBE_LOCATION=19339
	I0729 10:34:58.154319    6545 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:34:58.156654    6545 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:34:58.160304    6545 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:34:58.176387    6545 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	W0729 10:34:58.182294    6545 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:34:58.182554    6545 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:34:58.186317    6545 out.go:97] Using the qemu2 driver based on user configuration
	I0729 10:34:58.186336    6545 start.go:297] selected driver: qemu2
	I0729 10:34:58.186351    6545 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:34:58.186417    6545 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:34:58.189996    6545 out.go:169] Automatically selected the socket_vmnet network
	I0729 10:34:58.195753    6545 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 10:34:58.195853    6545 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:34:58.195917    6545 cni.go:84] Creating CNI manager for ""
	I0729 10:34:58.195941    6545 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 10:34:58.196004    6545 start.go:340] cluster config:
	{Name:download-only-403000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:34:58.200022    6545 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:34:58.204857    6545 out.go:97] Downloading VM boot image ...
	I0729 10:34:58.204878    6545 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0729 10:35:05.490823    6545 out.go:97] Starting "download-only-403000" primary control-plane node in "download-only-403000" cluster
	I0729 10:35:05.490849    6545 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:35:05.549352    6545 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 10:35:05.549359    6545 cache.go:56] Caching tarball of preloaded images
	I0729 10:35:05.550219    6545 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:35:05.556879    6545 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 10:35:05.556885    6545 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:05.640983    6545 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 10:35:12.342605    6545 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:12.342761    6545 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:13.038822    6545 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 10:35:13.039006    6545 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/download-only-403000/config.json ...
	I0729 10:35:13.039024    6545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/download-only-403000/config.json: {Name:mkb9ca26ad1005982ac978eb61746b6b0a1304c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:35:13.039259    6545 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:35:13.039452    6545 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 10:35:13.623573    6545 out.go:169] 
	W0729 10:35:13.628553    6545 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109081a60 0x109081a60 0x109081a60 0x109081a60 0x109081a60 0x109081a60 0x109081a60] Decompressors:map[bz2:0x1400089b270 gz:0x1400089b278 tar:0x1400089b220 tar.bz2:0x1400089b230 tar.gz:0x1400089b240 tar.xz:0x1400089b250 tar.zst:0x1400089b260 tbz2:0x1400089b230 tgz:0x1400089b240 txz:0x1400089b250 tzst:0x1400089b260 xz:0x1400089b280 zip:0x1400089b290 zst:0x1400089b288] Getters:map[file:0x14000985970 http:0x140006dc640 https:0x140006dc690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 10:35:13.628580    6545 out_reason.go:110] 
	W0729 10:35:13.636560    6545 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:35:13.640521    6545 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-403000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (15.58s)
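
The root cause is visible in the INET_CACHE_KUBECTL error: the checksum URL for a darwin/arm64 kubectl v1.20.0 returns 404, consistent with upstream not publishing darwin/arm64 kubectl binaries for that release. A minimal Go sketch (not part of the test suite; the URL is copied verbatim from the log) to confirm the 404 independently:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied verbatim from the error above.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
	// Expected here: "404 Not Found", matching "bad response code: 404" in the log.
	fmt.Println(url, "->", resp.Status)
}
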

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
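
This subtest is a plain existence check on the cached binary, so it fails as a direct consequence of the download failure above. A sketch of the equivalent standalone check (the path is copied from the failure message; everything else is illustrative, not the test's actual code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path copied from the failure message above.
	p := "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(p); err != nil {
		// In this run the download never succeeded, so stat fails with "no such file or directory".
		fmt.Println("missing cached kubectl:", err)
		return
	}
	fmt.Println("cached kubectl present at", p)
}
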

TestOffline (10.08s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-984000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-984000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.929026792s)

-- stdout --
	* [offline-docker-984000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-984000" primary control-plane node in "offline-docker-984000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-984000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:46:53.922273    7949 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:53.922401    7949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:53.922404    7949 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:53.922407    7949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:53.922553    7949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:46:53.923837    7949 out.go:298] Setting JSON to false
	I0729 10:46:53.941464    7949 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4582,"bootTime":1722270631,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:46:53.941600    7949 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:46:53.945861    7949 out.go:177] * [offline-docker-984000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:46:53.953863    7949 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:46:53.953871    7949 notify.go:220] Checking for updates...
	I0729 10:46:53.959736    7949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:46:53.962787    7949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:46:53.965825    7949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:46:53.967025    7949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:46:53.969748    7949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:46:53.973168    7949 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:46:53.973222    7949 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:46:53.976579    7949 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:46:53.983819    7949 start.go:297] selected driver: qemu2
	I0729 10:46:53.983832    7949 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:46:53.983840    7949 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:46:53.985910    7949 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:46:53.988772    7949 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:46:53.991949    7949 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:46:53.991964    7949 cni.go:84] Creating CNI manager for ""
	I0729 10:46:53.991972    7949 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:46:53.991974    7949 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:46:53.992008    7949 start.go:340] cluster config:
	{Name:offline-docker-984000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:46:53.995763    7949 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:53.999803    7949 out.go:177] * Starting "offline-docker-984000" primary control-plane node in "offline-docker-984000" cluster
	I0729 10:46:54.007783    7949 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:46:54.007817    7949 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:46:54.007829    7949 cache.go:56] Caching tarball of preloaded images
	I0729 10:46:54.007915    7949 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:46:54.007921    7949 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:46:54.007992    7949 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/offline-docker-984000/config.json ...
	I0729 10:46:54.008004    7949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/offline-docker-984000/config.json: {Name:mk5495d5917b3cce678e9539fbf0f2d63689fa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:54.008235    7949 start.go:360] acquireMachinesLock for offline-docker-984000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:54.008273    7949 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "offline-docker-984000"
	I0729 10:46:54.008285    7949 start.go:93] Provisioning new machine with config: &{Name:offline-docker-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:54.008325    7949 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:54.015862    7949 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:46:54.031927    7949 start.go:159] libmachine.API.Create for "offline-docker-984000" (driver="qemu2")
	I0729 10:46:54.031966    7949 client.go:168] LocalClient.Create starting
	I0729 10:46:54.032046    7949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:46:54.032079    7949 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:54.032090    7949 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:54.032144    7949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:46:54.032168    7949 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:54.032180    7949 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:54.032555    7949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:46:54.182550    7949 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:54.297337    7949 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:54.297345    7949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:54.300753    7949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2
	I0729 10:46:54.310458    7949 main.go:141] libmachine: STDOUT: 
	I0729 10:46:54.310478    7949 main.go:141] libmachine: STDERR: 
	I0729 10:46:54.310529    7949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2 +20000M
	I0729 10:46:54.318984    7949 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:54.319009    7949 main.go:141] libmachine: STDERR: 
	I0729 10:46:54.319027    7949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2
	I0729 10:46:54.319031    7949 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:54.319047    7949 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:54.319077    7949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:2b:b5:52:31:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2
	I0729 10:46:54.321152    7949 main.go:141] libmachine: STDOUT: 
	I0729 10:46:54.321165    7949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:54.321189    7949 client.go:171] duration metric: took 289.223ms to LocalClient.Create
	I0729 10:46:56.323227    7949 start.go:128] duration metric: took 2.314934s to createHost
	I0729 10:46:56.323255    7949 start.go:83] releasing machines lock for "offline-docker-984000", held for 2.315016208s
	W0729 10:46:56.323269    7949 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:56.332367    7949 out.go:177] * Deleting "offline-docker-984000" in qemu2 ...
	W0729 10:46:56.344953    7949 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:56.344963    7949 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:01.347174    7949 start.go:360] acquireMachinesLock for offline-docker-984000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:01.347604    7949 start.go:364] duration metric: took 310.208µs to acquireMachinesLock for "offline-docker-984000"
	I0729 10:47:01.347834    7949 start.go:93] Provisioning new machine with config: &{Name:offline-docker-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:01.348066    7949 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:01.365439    7949 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:47:01.414805    7949 start.go:159] libmachine.API.Create for "offline-docker-984000" (driver="qemu2")
	I0729 10:47:01.414887    7949 client.go:168] LocalClient.Create starting
	I0729 10:47:01.415047    7949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:47:01.415119    7949 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:01.415136    7949 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:01.415252    7949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:47:01.415314    7949 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:01.415328    7949 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:01.415857    7949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:47:01.573646    7949 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:01.755102    7949 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:01.755111    7949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:01.755351    7949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2
	I0729 10:47:01.764737    7949 main.go:141] libmachine: STDOUT: 
	I0729 10:47:01.764759    7949 main.go:141] libmachine: STDERR: 
	I0729 10:47:01.764827    7949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2 +20000M
	I0729 10:47:01.772913    7949 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:01.772931    7949 main.go:141] libmachine: STDERR: 
	I0729 10:47:01.772941    7949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2
	I0729 10:47:01.772945    7949 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:01.772963    7949 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:01.772984    7949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:4b:c7:9f:1e:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/offline-docker-984000/disk.qcow2
	I0729 10:47:01.774594    7949 main.go:141] libmachine: STDOUT: 
	I0729 10:47:01.774617    7949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:01.774629    7949 client.go:171] duration metric: took 359.729042ms to LocalClient.Create
	I0729 10:47:03.776850    7949 start.go:128] duration metric: took 2.428773209s to createHost
	I0729 10:47:03.776968    7949 start.go:83] releasing machines lock for "offline-docker-984000", held for 2.429340875s
	W0729 10:47:03.777244    7949 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-984000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-984000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:03.789994    7949 out.go:177] 
	W0729 10:47:03.794066    7949 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:03.794094    7949 out.go:239] * 
	* 
	W0729 10:47:03.797266    7949 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:03.807935    7949 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-984000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-29 10:47:03.822512 -0700 PDT m=+725.850113667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-984000 -n offline-docker-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-984000 -n offline-docker-984000: exit status 7 (67.03375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-984000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-984000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-984000
--- FAIL: TestOffline (10.08s)
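
Both VM creation attempts above die the same way: socket_vmnet_client cannot connect to /var/run/socket_vmnet, which suggests the socket_vmnet daemon is not running (or its socket is stale) on the agent. A small Go probe makes the check explicit (an illustrative diagnostic, not minikube code; the socket path is taken from the libmachine command line above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Unix socket path copied from the libmachine command line above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the STDERR from socket_vmnet_client,
		// i.e. nothing is listening on the socket (daemon down or path stale).
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", sock)
}
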

TestAddons/Setup (10.61s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-166000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-166000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.603740833s)

-- stdout --
	* [addons-166000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-166000" primary control-plane node in "addons-166000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-166000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:35:35.717332    6651 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:35:35.717450    6651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:35:35.717453    6651 out.go:304] Setting ErrFile to fd 2...
	I0729 10:35:35.717466    6651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:35:35.717603    6651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:35:35.718660    6651 out.go:298] Setting JSON to false
	I0729 10:35:35.734888    6651 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3904,"bootTime":1722270631,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:35:35.734955    6651 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:35:35.738992    6651 out.go:177] * [addons-166000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:35:35.746130    6651 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:35:35.746167    6651 notify.go:220] Checking for updates...
	I0729 10:35:35.751996    6651 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:35:35.755012    6651 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:35:35.756442    6651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:35:35.760021    6651 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:35:35.762996    6651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:35:35.766151    6651 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:35:35.769963    6651 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:35:35.777009    6651 start.go:297] selected driver: qemu2
	I0729 10:35:35.777019    6651 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:35:35.777027    6651 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:35:35.779260    6651 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:35:35.782007    6651 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:35:35.785129    6651 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:35:35.785148    6651 cni.go:84] Creating CNI manager for ""
	I0729 10:35:35.785156    6651 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:35:35.785160    6651 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:35:35.785186    6651 start.go:340] cluster config:
	{Name:addons-166000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:35:35.788801    6651 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:35:35.797007    6651 out.go:177] * Starting "addons-166000" primary control-plane node in "addons-166000" cluster
	I0729 10:35:35.801052    6651 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:35:35.801069    6651 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:35:35.801078    6651 cache.go:56] Caching tarball of preloaded images
	I0729 10:35:35.801151    6651 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:35:35.801162    6651 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:35:35.801345    6651 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/addons-166000/config.json ...
	I0729 10:35:35.801356    6651 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/addons-166000/config.json: {Name:mk3e90e4d1dd8b83cba93ee483073973f4c1e7e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:35:35.801741    6651 start.go:360] acquireMachinesLock for addons-166000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:35:35.801819    6651 start.go:364] duration metric: took 71.792µs to acquireMachinesLock for "addons-166000"
	I0729 10:35:35.801831    6651 start.go:93] Provisioning new machine with config: &{Name:addons-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:35:35.801874    6651 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:35:35.811009    6651 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 10:35:35.828693    6651 start.go:159] libmachine.API.Create for "addons-166000" (driver="qemu2")
	I0729 10:35:35.828715    6651 client.go:168] LocalClient.Create starting
	I0729 10:35:35.828856    6651 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:35:36.066060    6651 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:35:36.203892    6651 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:35:36.530655    6651 main.go:141] libmachine: Creating SSH key...
	I0729 10:35:36.737199    6651 main.go:141] libmachine: Creating Disk image...
	I0729 10:35:36.737211    6651 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:35:36.737446    6651 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2
	I0729 10:35:36.747108    6651 main.go:141] libmachine: STDOUT: 
	I0729 10:35:36.747140    6651 main.go:141] libmachine: STDERR: 
	I0729 10:35:36.747200    6651 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2 +20000M
	I0729 10:35:36.755128    6651 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:35:36.755143    6651 main.go:141] libmachine: STDERR: 
	I0729 10:35:36.755159    6651 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2
	I0729 10:35:36.755166    6651 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:35:36.755194    6651 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:35:36.755219    6651 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:33:c6:c6:9f:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2
	I0729 10:35:36.756884    6651 main.go:141] libmachine: STDOUT: 
	I0729 10:35:36.756898    6651 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:35:36.756919    6651 client.go:171] duration metric: took 928.196917ms to LocalClient.Create
	I0729 10:35:38.759123    6651 start.go:128] duration metric: took 2.957226833s to createHost
	I0729 10:35:38.759176    6651 start.go:83] releasing machines lock for "addons-166000", held for 2.957345459s
	W0729 10:35:38.759237    6651 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:35:38.770406    6651 out.go:177] * Deleting "addons-166000" in qemu2 ...
	W0729 10:35:38.800244    6651 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:35:38.800273    6651 start.go:729] Will try again in 5 seconds ...
	I0729 10:35:43.802533    6651 start.go:360] acquireMachinesLock for addons-166000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:35:43.802993    6651 start.go:364] duration metric: took 364.666µs to acquireMachinesLock for "addons-166000"
	I0729 10:35:43.803129    6651 start.go:93] Provisioning new machine with config: &{Name:addons-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:35:43.803428    6651 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:35:43.817091    6651 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 10:35:43.867929    6651 start.go:159] libmachine.API.Create for "addons-166000" (driver="qemu2")
	I0729 10:35:43.867968    6651 client.go:168] LocalClient.Create starting
	I0729 10:35:43.868077    6651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:35:43.868133    6651 main.go:141] libmachine: Decoding PEM data...
	I0729 10:35:43.868154    6651 main.go:141] libmachine: Parsing certificate...
	I0729 10:35:43.868237    6651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:35:43.868288    6651 main.go:141] libmachine: Decoding PEM data...
	I0729 10:35:43.868301    6651 main.go:141] libmachine: Parsing certificate...
	I0729 10:35:43.869013    6651 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:35:44.053760    6651 main.go:141] libmachine: Creating SSH key...
	I0729 10:35:44.231684    6651 main.go:141] libmachine: Creating Disk image...
	I0729 10:35:44.231690    6651 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:35:44.231899    6651 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2
	I0729 10:35:44.241523    6651 main.go:141] libmachine: STDOUT: 
	I0729 10:35:44.241553    6651 main.go:141] libmachine: STDERR: 
	I0729 10:35:44.241622    6651 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2 +20000M
	I0729 10:35:44.249459    6651 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:35:44.249476    6651 main.go:141] libmachine: STDERR: 
	I0729 10:35:44.249494    6651 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2
	I0729 10:35:44.249497    6651 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:35:44.249509    6651 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:35:44.249543    6651 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:5d:94:25:57:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/addons-166000/disk.qcow2
	I0729 10:35:44.251213    6651 main.go:141] libmachine: STDOUT: 
	I0729 10:35:44.251226    6651 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:35:44.251242    6651 client.go:171] duration metric: took 383.267417ms to LocalClient.Create
	I0729 10:35:46.252640    6651 start.go:128] duration metric: took 2.449166709s to createHost
	I0729 10:35:46.252731    6651 start.go:83] releasing machines lock for "addons-166000", held for 2.449671417s
	W0729 10:35:46.253223    6651 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-166000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-166000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:35:46.261748    6651 out.go:177] 
	W0729 10:35:46.267881    6651 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:35:46.267905    6651 out.go:239] * 
	* 
	W0729 10:35:46.271042    6651 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:35:46.278527    6651 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-166000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.61s)
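
Editor's note: every failure in this run bottoms out on the same first error. socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never handed its network file descriptor (the -netdev socket,id=net0,fd=3 argument in the invocation above), and the create/retry/delete cycle repeats until the test gives up. Below is a minimal, hypothetical Go probe (not part of the test suite) that reproduces just the connect step minikube is failing on; it assumes the daemon's control socket accepts stream connections, since the point is only to distinguish "connection refused" (no daemon listening) from a reachable socket.

// probe_socket_vmnet.go -- hypothetical standalone probe; not part of the
// minikube test suite. It attempts a connection to the same Unix socket
// that socket_vmnet_client opens before handing a network fd to QEMU.
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Path taken from the cluster config in the log above
	// (SocketVMnetPath:/var/run/socket_vmnet).
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// "connection refused" here matches the failure in these logs;
		// "no such file or directory" would mean the socket was never created.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe is refused, restarting the socket_vmnet service on the build host (it is typically run as a root launchd/Homebrew service) should clear the identical GUEST_PROVISION failures in the tests below.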

TestCertOptions (10.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-952000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-952000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.034341333s)

-- stdout --
	* [cert-options-952000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-952000" primary control-plane node in "cert-options-952000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-952000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-952000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-952000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-952000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-952000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (76.828916ms)

-- stdout --
	* The control-plane node cert-options-952000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-952000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-952000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-952000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-952000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-952000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.04625ms)

-- stdout --
	* The control-plane node cert-options-952000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-952000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-952000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-952000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-952000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-29 10:47:34.658614 -0700 PDT m=+756.686734792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-952000 -n cert-options-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-952000 -n cert-options-952000: exit status 7 (29.875958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-952000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-952000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-952000
--- FAIL: TestCertOptions (10.29s)
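
Editor's note: the SAN assertions at cert_options_test.go:69 never had a certificate to inspect; the ssh "openssl x509 ..." step had already failed because the host was stopped. For reference, here is a hedged sketch (not the actual test code) of checking the IP and DNS SANs the test expects, assuming apiserver.crt has been copied to the working directory (the real test reads it over minikube ssh).

// san_check.go -- hypothetical illustration of the SAN checks that
// cert_options_test.go performs against apiserver.crt; not the real test code.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // assumed local copy
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The test expects 127.0.0.1 and 192.168.15.15 among the IP SANs,
	// and localhost / www.google.com among the DNS SANs.
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("DNS SANs:", cert.DNSNames)
}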

TestCertExpiration (195.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-864000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-864000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.047195291s)

-- stdout --
	* [cert-expiration-864000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-864000" primary control-plane node in "cert-expiration-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-864000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-864000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-864000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.232550333s)

-- stdout --
	* [cert-expiration-864000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-864000" primary control-plane node in "cert-expiration-864000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-864000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-864000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-864000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-864000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-864000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-864000" primary control-plane node in "cert-expiration-864000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-864000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-864000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-864000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-29 10:50:34.552323 -0700 PDT m=+936.583475667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-864000 -n cert-expiration-864000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-864000 -n cert-expiration-864000: exit status 7 (67.317834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-864000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-864000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-864000
--- FAIL: TestCertExpiration (195.43s)
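
Editor's note: most of this test's 195s is deliberate waiting. It starts a cluster whose certificates expire after 3 minutes, lets them lapse, then restarts with --cert-expiration=8760h and expects the second start's output to warn about the expired certs (the cert_options_test.go:136 assertion). Here both starts failed at the socket_vmnet step before any certificate was issued. A hedged condensation of that flow follows; the two commands are the ones in the log, while the "expired" substring check is an assumption, not the test's exact assertion.

// cert_expiration_flow.go -- hypothetical condensation of the
// TestCertExpiration flow visible in the log; not the real test code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	bin := "out/minikube-darwin-arm64"
	profile := "cert-expiration-864000"

	// First start: client certs valid for only 3 minutes (command from the log).
	exec.Command(bin, "start", "-p", profile, "--memory=2048",
		"--cert-expiration=3m", "--driver=qemu2").Run()

	// Wait out the 3-minute validity window; this accounts for most of
	// the test's wall time.
	time.Sleep(3 * time.Minute)

	// Second start: normal expiration. The test expects this output to
	// warn about the now-expired certificates.
	out, _ := exec.Command(bin, "start", "-p", profile, "--memory=2048",
		"--cert-expiration=8760h", "--driver=qemu2").CombinedOutput()
	if !strings.Contains(string(out), "expired") {
		fmt.Println("start output did not warn about expired certs")
	}
}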

TestDockerFlags (10.36s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-400000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-400000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.123200667s)

-- stdout --
	* [docker-flags-400000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-400000" primary control-plane node in "docker-flags-400000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-400000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:47:14.144737    8138 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:14.144863    8138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:14.144865    8138 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:14.144868    8138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:14.144994    8138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:47:14.146076    8138 out.go:298] Setting JSON to false
	I0729 10:47:14.161986    8138 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4603,"bootTime":1722270631,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:47:14.162053    8138 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:14.167329    8138 out.go:177] * [docker-flags-400000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:14.175199    8138 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:47:14.175241    8138 notify.go:220] Checking for updates...
	I0729 10:47:14.182173    8138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:47:14.185204    8138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:14.189183    8138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:14.192197    8138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:47:14.195236    8138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:14.198510    8138 config.go:182] Loaded profile config "force-systemd-flag-917000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:14.198577    8138 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:14.198625    8138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:14.202182    8138 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:47:14.209251    8138 start.go:297] selected driver: qemu2
	I0729 10:47:14.209261    8138 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:47:14.209270    8138 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:14.211543    8138 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:47:14.215187    8138 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:47:14.218237    8138 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0729 10:47:14.218270    8138 cni.go:84] Creating CNI manager for ""
	I0729 10:47:14.218277    8138 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:47:14.218281    8138 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:47:14.218306    8138 start.go:340] cluster config:
	{Name:docker-flags-400000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:14.221952    8138 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:14.230199    8138 out.go:177] * Starting "docker-flags-400000" primary control-plane node in "docker-flags-400000" cluster
	I0729 10:47:14.234226    8138 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:47:14.234242    8138 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:47:14.234255    8138 cache.go:56] Caching tarball of preloaded images
	I0729 10:47:14.234322    8138 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:47:14.234329    8138 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:47:14.234405    8138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/docker-flags-400000/config.json ...
	I0729 10:47:14.234422    8138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/docker-flags-400000/config.json: {Name:mkc8e5c4d12b621c205deacffd592ab9fcbe21ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:47:14.234652    8138 start.go:360] acquireMachinesLock for docker-flags-400000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:14.234691    8138 start.go:364] duration metric: took 31.25µs to acquireMachinesLock for "docker-flags-400000"
	I0729 10:47:14.234704    8138 start.go:93] Provisioning new machine with config: &{Name:docker-flags-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:14.234732    8138 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:14.242244    8138 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:47:14.260371    8138 start.go:159] libmachine.API.Create for "docker-flags-400000" (driver="qemu2")
	I0729 10:47:14.260399    8138 client.go:168] LocalClient.Create starting
	I0729 10:47:14.260465    8138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:47:14.260494    8138 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:14.260504    8138 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:14.260541    8138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:47:14.260565    8138 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:14.260572    8138 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:14.260928    8138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:47:14.410807    8138 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:14.481936    8138 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:14.481941    8138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:14.482136    8138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2
	I0729 10:47:14.491136    8138 main.go:141] libmachine: STDOUT: 
	I0729 10:47:14.491153    8138 main.go:141] libmachine: STDERR: 
	I0729 10:47:14.491203    8138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2 +20000M
	I0729 10:47:14.498916    8138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:14.498931    8138 main.go:141] libmachine: STDERR: 
	I0729 10:47:14.498941    8138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2
	I0729 10:47:14.498946    8138 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:14.498958    8138 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:14.498989    8138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:3e:5e:e8:c8:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2
	I0729 10:47:14.500558    8138 main.go:141] libmachine: STDOUT: 
	I0729 10:47:14.500571    8138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:14.500590    8138 client.go:171] duration metric: took 240.191ms to LocalClient.Create
	I0729 10:47:16.502736    8138 start.go:128] duration metric: took 2.268015375s to createHost
	I0729 10:47:16.502801    8138 start.go:83] releasing machines lock for "docker-flags-400000", held for 2.268138s
	W0729 10:47:16.502914    8138 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:16.518229    8138 out.go:177] * Deleting "docker-flags-400000" in qemu2 ...
	W0729 10:47:16.543810    8138 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:16.543840    8138 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:21.545979    8138 start.go:360] acquireMachinesLock for docker-flags-400000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:21.739386    8138 start.go:364] duration metric: took 193.241625ms to acquireMachinesLock for "docker-flags-400000"
	I0729 10:47:21.739549    8138 start.go:93] Provisioning new machine with config: &{Name:docker-flags-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:21.739741    8138 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:21.748429    8138 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:47:21.799212    8138 start.go:159] libmachine.API.Create for "docker-flags-400000" (driver="qemu2")
	I0729 10:47:21.799267    8138 client.go:168] LocalClient.Create starting
	I0729 10:47:21.799394    8138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:47:21.799465    8138 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:21.799480    8138 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:21.799552    8138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:47:21.799597    8138 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:21.799611    8138 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:21.800217    8138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:47:21.962125    8138 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:22.167299    8138 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:22.167306    8138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:22.167572    8138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2
	I0729 10:47:22.177465    8138 main.go:141] libmachine: STDOUT: 
	I0729 10:47:22.177484    8138 main.go:141] libmachine: STDERR: 
	I0729 10:47:22.177533    8138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2 +20000M
	I0729 10:47:22.185298    8138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:22.185313    8138 main.go:141] libmachine: STDERR: 
	I0729 10:47:22.185331    8138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2
	I0729 10:47:22.185336    8138 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:22.185349    8138 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:22.185377    8138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:61:5a:f4:f2:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/docker-flags-400000/disk.qcow2
	I0729 10:47:22.186958    8138 main.go:141] libmachine: STDOUT: 
	I0729 10:47:22.186973    8138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:22.186985    8138 client.go:171] duration metric: took 387.718833ms to LocalClient.Create
	I0729 10:47:24.189119    8138 start.go:128] duration metric: took 2.449394625s to createHost
	I0729 10:47:24.189165    8138 start.go:83] releasing machines lock for "docker-flags-400000", held for 2.449773083s
	W0729 10:47:24.189466    8138 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-400000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-400000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:24.208015    8138 out.go:177] 
	W0729 10:47:24.211025    8138 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:24.211076    8138 out.go:239] * 
	* 
	W0729 10:47:24.213551    8138 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:24.223948    8138 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-400000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-400000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-400000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (80.179333ms)

-- stdout --
	* The control-plane node docker-flags-400000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-400000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-400000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-400000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-400000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-400000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-400000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-400000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-400000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (47.538083ms)

-- stdout --
	* The control-plane node docker-flags-400000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-400000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-400000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-400000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-400000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-400000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-29 10:47:24.370938 -0700 PDT m=+746.398885751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-400000 -n docker-flags-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-400000 -n docker-flags-400000: exit status 7 (28.907375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-400000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-400000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-400000
--- FAIL: TestDockerFlags (10.36s)
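
Every failure in the TestDockerFlags block above reduces to one root cause: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), i.e. the socket_vmnet daemon was not listening on this CI host. A minimal Go sketch of a pre-flight probe for that socket follows; the file name and flow are hypothetical (this is not part of the minikube test suite), but dialing the socket the same way reproduces the exact error seen in the logs.

// socketprobe.go: hypothetical pre-flight check, not part of minikube.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // SocketVMnetPath from the config dumps above

	// Dial the unix socket the way socket_vmnet_client does; a daemon that
	// is not running yields "connection refused", matching the failures above.
	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", socketPath)
}

Running such a probe before the suite would fail fast instead of letting each test spend ~10s in the create/retry/delete cycle recorded above.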

TestForceSystemdFlag (10.2s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-917000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-917000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.012624542s)

-- stdout --
	* [force-systemd-flag-917000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-917000" primary control-plane node in "force-systemd-flag-917000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-917000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:47:09.094252    8117 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:09.094380    8117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:09.094383    8117 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:09.094385    8117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:09.094511    8117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:47:09.095569    8117 out.go:298] Setting JSON to false
	I0729 10:47:09.111533    8117 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4598,"bootTime":1722270631,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:47:09.111597    8117 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:09.117521    8117 out.go:177] * [force-systemd-flag-917000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:09.124509    8117 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:47:09.124543    8117 notify.go:220] Checking for updates...
	I0729 10:47:09.132446    8117 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:47:09.136443    8117 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:09.139511    8117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:09.142461    8117 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:47:09.149457    8117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:47:09.153742    8117 config.go:182] Loaded profile config "force-systemd-env-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:09.153817    8117 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:09.153887    8117 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:09.157454    8117 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:47:09.163399    8117 start.go:297] selected driver: qemu2
	I0729 10:47:09.163406    8117 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:47:09.163411    8117 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:09.165817    8117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:47:09.169450    8117 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:47:09.172573    8117 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:47:09.172591    8117 cni.go:84] Creating CNI manager for ""
	I0729 10:47:09.172598    8117 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:47:09.172603    8117 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:47:09.172649    8117 start.go:340] cluster config:
	{Name:force-systemd-flag-917000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:09.176403    8117 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:09.185454    8117 out.go:177] * Starting "force-systemd-flag-917000" primary control-plane node in "force-systemd-flag-917000" cluster
	I0729 10:47:09.188454    8117 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:47:09.188468    8117 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:47:09.188476    8117 cache.go:56] Caching tarball of preloaded images
	I0729 10:47:09.188529    8117 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:47:09.188535    8117 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:47:09.188590    8117 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/force-systemd-flag-917000/config.json ...
	I0729 10:47:09.188600    8117 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/force-systemd-flag-917000/config.json: {Name:mkd2ddf2a2ceac5c1b0f2411fdc297ce17cdcdfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:47:09.188830    8117 start.go:360] acquireMachinesLock for force-systemd-flag-917000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:09.188866    8117 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "force-systemd-flag-917000"
	I0729 10:47:09.188879    8117 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:09.188910    8117 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:09.196422    8117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:47:09.214585    8117 start.go:159] libmachine.API.Create for "force-systemd-flag-917000" (driver="qemu2")
	I0729 10:47:09.214607    8117 client.go:168] LocalClient.Create starting
	I0729 10:47:09.214678    8117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:47:09.214709    8117 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:09.214722    8117 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:09.214765    8117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:47:09.214789    8117 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:09.214799    8117 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:09.215168    8117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:47:09.363015    8117 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:09.556954    8117 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:09.556962    8117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:09.557174    8117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2
	I0729 10:47:09.566855    8117 main.go:141] libmachine: STDOUT: 
	I0729 10:47:09.566878    8117 main.go:141] libmachine: STDERR: 
	I0729 10:47:09.566933    8117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2 +20000M
	I0729 10:47:09.574836    8117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:09.574851    8117 main.go:141] libmachine: STDERR: 
	I0729 10:47:09.574865    8117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2
	I0729 10:47:09.574869    8117 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:09.574881    8117 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:09.574910    8117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:10:6a:d5:ca:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2
	I0729 10:47:09.576569    8117 main.go:141] libmachine: STDOUT: 
	I0729 10:47:09.576586    8117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:09.576606    8117 client.go:171] duration metric: took 362.000084ms to LocalClient.Create
	I0729 10:47:11.578744    8117 start.go:128] duration metric: took 2.389855708s to createHost
	I0729 10:47:11.578783    8117 start.go:83] releasing machines lock for "force-systemd-flag-917000", held for 2.389947834s
	W0729 10:47:11.578836    8117 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:11.603799    8117 out.go:177] * Deleting "force-systemd-flag-917000" in qemu2 ...
	W0729 10:47:11.624507    8117 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:11.624540    8117 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:16.626660    8117 start.go:360] acquireMachinesLock for force-systemd-flag-917000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:16.627111    8117 start.go:364] duration metric: took 352.583µs to acquireMachinesLock for "force-systemd-flag-917000"
	I0729 10:47:16.627224    8117 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:16.627517    8117 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:16.636833    8117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:47:16.686295    8117 start.go:159] libmachine.API.Create for "force-systemd-flag-917000" (driver="qemu2")
	I0729 10:47:16.686349    8117 client.go:168] LocalClient.Create starting
	I0729 10:47:16.686470    8117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:47:16.686538    8117 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:16.686557    8117 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:16.686643    8117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:47:16.686691    8117 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:16.686700    8117 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:16.687867    8117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:47:16.854826    8117 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:17.016313    8117 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:17.016319    8117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:17.016543    8117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2
	I0729 10:47:17.026159    8117 main.go:141] libmachine: STDOUT: 
	I0729 10:47:17.026179    8117 main.go:141] libmachine: STDERR: 
	I0729 10:47:17.026247    8117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2 +20000M
	I0729 10:47:17.033984    8117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:17.034004    8117 main.go:141] libmachine: STDERR: 
	I0729 10:47:17.034017    8117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2
	I0729 10:47:17.034022    8117 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:17.034028    8117 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:17.034064    8117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:ad:91:de:fc:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-flag-917000/disk.qcow2
	I0729 10:47:17.035716    8117 main.go:141] libmachine: STDOUT: 
	I0729 10:47:17.035730    8117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:17.035743    8117 client.go:171] duration metric: took 349.393667ms to LocalClient.Create
	I0729 10:47:19.037886    8117 start.go:128] duration metric: took 2.410386125s to createHost
	I0729 10:47:19.037945    8117 start.go:83] releasing machines lock for "force-systemd-flag-917000", held for 2.410845834s
	W0729 10:47:19.038349    8117 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-917000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-917000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:19.046820    8117 out.go:177] 
	W0729 10:47:19.053889    8117 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:19.053914    8117 out.go:239] * 
	* 
	W0729 10:47:19.056691    8117 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:19.064854    8117 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-917000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-917000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-917000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.369708ms)

-- stdout --
	* The control-plane node force-systemd-flag-917000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-917000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-917000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-29 10:47:19.16399 -0700 PDT m=+741.191849167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-917000 -n force-systemd-flag-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-917000 -n force-systemd-flag-917000: exit status 7 (34.189208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-917000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-917000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-917000
--- FAIL: TestForceSystemdFlag (10.20s)
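
TestForceSystemdFlag never reaches its real assertion because the VM cannot start; had it booted, the test would verify the cgroup driver via `docker info --format {{.CgroupDriver}}` over `minikube ssh` (docker_test.go:110 above) and expect "systemd". A hedged sketch of that verification step follows; it runs against a local docker daemon rather than through minikube ssh, purely to illustrate the check, and is not minikube's actual helper code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same query the test issues inside the VM; here it targets whatever
	// docker daemon the local client is configured for (illustrative only).
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker info failed: %v\n", err)
		os.Exit(1)
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Fprintf(os.Stderr, "expected cgroup driver systemd, got %q\n", driver)
		os.Exit(1)
	}
	fmt.Println("cgroup driver is systemd")
}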

TestForceSystemdEnv (10.15s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-810000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-810000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.950709833s)

-- stdout --
	* [force-systemd-env-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-810000" primary control-plane node in "force-systemd-env-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:47:03.996597    8085 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:47:03.996719    8085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:03.996723    8085 out.go:304] Setting ErrFile to fd 2...
	I0729 10:47:03.996739    8085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:47:03.996863    8085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:47:03.998014    8085 out.go:298] Setting JSON to false
	I0729 10:47:04.014565    8085 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4593,"bootTime":1722270631,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:47:04.014635    8085 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:47:04.020086    8085 out.go:177] * [force-systemd-env-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:47:04.028057    8085 notify.go:220] Checking for updates...
	I0729 10:47:04.032781    8085 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:47:04.035910    8085 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:47:04.038903    8085 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:47:04.041921    8085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:47:04.044963    8085 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:47:04.047907    8085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0729 10:47:04.051338    8085 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:47:04.051405    8085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:47:04.055915    8085 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:47:04.065870    8085 start.go:297] selected driver: qemu2
	I0729 10:47:04.065877    8085 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:47:04.065883    8085 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:47:04.068171    8085 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:47:04.073750    8085 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:47:04.081976    8085 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:47:04.082001    8085 cni.go:84] Creating CNI manager for ""
	I0729 10:47:04.082008    8085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:47:04.082017    8085 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:47:04.082044    8085 start.go:340] cluster config:
	{Name:force-systemd-env-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:47:04.085813    8085 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:47:04.092850    8085 out.go:177] * Starting "force-systemd-env-810000" primary control-plane node in "force-systemd-env-810000" cluster
	I0729 10:47:04.096888    8085 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:47:04.096937    8085 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:47:04.096947    8085 cache.go:56] Caching tarball of preloaded images
	I0729 10:47:04.097075    8085 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:47:04.097091    8085 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:47:04.097164    8085 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/force-systemd-env-810000/config.json ...
	I0729 10:47:04.097174    8085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/force-systemd-env-810000/config.json: {Name:mk44ce7d73c785b566ec52a903ed7010537c3719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:47:04.097408    8085 start.go:360] acquireMachinesLock for force-systemd-env-810000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:04.097439    8085 start.go:364] duration metric: took 25.666µs to acquireMachinesLock for "force-systemd-env-810000"
	I0729 10:47:04.097450    8085 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:04.097501    8085 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:04.101901    8085 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:47:04.117507    8085 start.go:159] libmachine.API.Create for "force-systemd-env-810000" (driver="qemu2")
	I0729 10:47:04.117533    8085 client.go:168] LocalClient.Create starting
	I0729 10:47:04.117604    8085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:47:04.117680    8085 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:04.117691    8085 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:04.117732    8085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:47:04.117757    8085 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:04.117765    8085 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:04.118099    8085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:47:04.267162    8085 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:04.350229    8085 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:04.350242    8085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:04.350455    8085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2
	I0729 10:47:04.359841    8085 main.go:141] libmachine: STDOUT: 
	I0729 10:47:04.359859    8085 main.go:141] libmachine: STDERR: 
	I0729 10:47:04.359902    8085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2 +20000M
	I0729 10:47:04.368067    8085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:04.368083    8085 main.go:141] libmachine: STDERR: 
	I0729 10:47:04.368096    8085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2
	I0729 10:47:04.368105    8085 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:04.368125    8085 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:04.368152    8085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:ba:56:4a:0b:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2
	I0729 10:47:04.369842    8085 main.go:141] libmachine: STDOUT: 
	I0729 10:47:04.369857    8085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:04.369876    8085 client.go:171] duration metric: took 252.343167ms to LocalClient.Create
	I0729 10:47:06.372051    8085 start.go:128] duration metric: took 2.274554583s to createHost
	I0729 10:47:06.372108    8085 start.go:83] releasing machines lock for "force-systemd-env-810000", held for 2.274698375s
	W0729 10:47:06.372173    8085 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:06.378527    8085 out.go:177] * Deleting "force-systemd-env-810000" in qemu2 ...
	W0729 10:47:06.411297    8085 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:06.411327    8085 start.go:729] Will try again in 5 seconds ...
	I0729 10:47:11.413479    8085 start.go:360] acquireMachinesLock for force-systemd-env-810000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:47:11.578894    8085 start.go:364] duration metric: took 165.263417ms to acquireMachinesLock for "force-systemd-env-810000"
	I0729 10:47:11.579007    8085 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:47:11.579216    8085 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:47:11.592770    8085 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 10:47:11.642036    8085 start.go:159] libmachine.API.Create for "force-systemd-env-810000" (driver="qemu2")
	I0729 10:47:11.642094    8085 client.go:168] LocalClient.Create starting
	I0729 10:47:11.642232    8085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:47:11.642291    8085 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:11.642309    8085 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:11.642373    8085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:47:11.642419    8085 main.go:141] libmachine: Decoding PEM data...
	I0729 10:47:11.642433    8085 main.go:141] libmachine: Parsing certificate...
	I0729 10:47:11.643069    8085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:47:11.803350    8085 main.go:141] libmachine: Creating SSH key...
	I0729 10:47:11.846846    8085 main.go:141] libmachine: Creating Disk image...
	I0729 10:47:11.846852    8085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:47:11.847088    8085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2
	I0729 10:47:11.856337    8085 main.go:141] libmachine: STDOUT: 
	I0729 10:47:11.856368    8085 main.go:141] libmachine: STDERR: 
	I0729 10:47:11.856415    8085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2 +20000M
	I0729 10:47:11.864250    8085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:47:11.864265    8085 main.go:141] libmachine: STDERR: 
	I0729 10:47:11.864277    8085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2
	I0729 10:47:11.864281    8085 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:47:11.864291    8085 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:47:11.864332    8085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:05:91:3d:ac:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/force-systemd-env-810000/disk.qcow2
	I0729 10:47:11.866057    8085 main.go:141] libmachine: STDOUT: 
	I0729 10:47:11.866071    8085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:47:11.866085    8085 client.go:171] duration metric: took 223.990167ms to LocalClient.Create
	I0729 10:47:13.868223    8085 start.go:128] duration metric: took 2.289015834s to createHost
	I0729 10:47:13.868285    8085 start.go:83] releasing machines lock for "force-systemd-env-810000", held for 2.289382209s
	W0729 10:47:13.868745    8085 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:47:13.888294    8085 out.go:177] 
	W0729 10:47:13.892437    8085 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:47:13.892505    8085 out.go:239] * 
	* 
	W0729 10:47:13.895288    8085 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:47:13.903187    8085 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-810000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-810000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-810000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.752167ms)

-- stdout --
	* The control-plane node force-systemd-env-810000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-810000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-810000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-29 10:47:14.000294 -0700 PDT m=+736.028066626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-810000 -n force-systemd-env-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-810000 -n force-systemd-env-810000: exit status 7 (33.018125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-810000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-810000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-810000
--- FAIL: TestForceSystemdEnv (10.15s)
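
Every failure in this run reduces to the same root cause visible above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never gets a network file descriptor. What follows is a minimal Go sketch (not minikube code; only the socket path is taken from the logs) of a precheck that would surface the problem before any VM work is attempted:

```go
// socketcheck.go - a minimal sketch, not part of minikube: probe the
// socket_vmnet unix socket before launching QEMU, so a dead daemon shows
// up as a clear message instead of a "Connection refused" mid-create.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the logs above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```

On a healthy agent this prints the success line; on this agent it would fail exactly as the logs do, pointing at the socket_vmnet service rather than at minikube itself.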

TestErrorSpam/setup (9.81s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-634000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-634000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 --driver=qemu2 : exit status 80 (9.804897417s)

-- stdout --
	* [nospam-634000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-634000" primary control-plane node in "nospam-634000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-634000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-634000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-634000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-634000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-634000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19339
- KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-634000" primary control-plane node in "nospam-634000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-634000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-634000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.81s)
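
The setup log above shows minikube's two-attempt start flow: a failed StartHost, a fixed wait ("Will try again in 5 seconds ..."), one retry, then exit status 80. A rough Go sketch of that fixed-delay retry shape, with startHost as a stand-in for the real host-creation call, not minikube's actual start.go:

```go
// retry.go - a sketch of the fixed-delay retry visible in the logs above;
// startHost is a placeholder that fails the way this agent does.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2 // the logs show exactly one retry after the first failure
	var err error
	for i := 0; i < attempts; i++ {
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		if i < attempts-1 {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
		}
	}
	fmt.Printf("X giving up after %d attempts: %v\n", attempts, err)
}
```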

TestFunctional/serial/StartWithProxy (9.84s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-863000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-863000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.768712084s)

-- stdout --
	* [functional-863000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-863000" primary control-plane node in "functional-863000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-863000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-863000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-863000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-863000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19339
- KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-863000" primary control-plane node in "functional-863000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-863000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51067 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-863000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (65.2825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.84s)
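
StartWithProxy exports HTTP_PROXY=localhost:51067 and expects the "Local proxy ignored" warning seen in the stderr above: a loopback proxy on the host is unreachable from inside the guest, so it must not be passed into the docker env. A sketch of that loopback detection; isLocalProxy is illustrative, not minikube's actual helper:

```go
// proxycheck.go - a sketch of the loopback-proxy detection implied by the
// "Local proxy ignored: not passing HTTP_PROXY=localhost:51067" warning.
package main

import (
	"fmt"
	"net"
	"os"
	"strings"
)

// isLocalProxy reports whether a proxy value points at the local machine;
// such a proxy is useless inside the VM.
func isLocalProxy(val string) bool {
	host := val
	if i := strings.Index(val, "://"); i >= 0 {
		host = val[i+3:] // drop an optional scheme prefix
	}
	if h, _, err := net.SplitHostPort(host); err == nil {
		host = h // drop the port, e.g. "localhost:51067" -> "localhost"
	}
	if host == "localhost" {
		return true
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}

func main() {
	if v := os.Getenv("HTTP_PROXY"); v != "" && isLocalProxy(v) {
		fmt.Printf("! Local proxy ignored: not passing HTTP_PROXY=%s to docker env.\n", v)
	}
}
```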

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-863000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-863000 --alsologtostderr -v=8: exit status 80 (5.198338125s)

-- stdout --
	* [functional-863000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-863000" primary control-plane node in "functional-863000" cluster
	* Restarting existing qemu2 VM for "functional-863000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-863000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:36:17.149398    6802 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:36:17.149518    6802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:17.149521    6802 out.go:304] Setting ErrFile to fd 2...
	I0729 10:36:17.149524    6802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:17.149629    6802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:36:17.150589    6802 out.go:298] Setting JSON to false
	I0729 10:36:17.166598    6802 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3946,"bootTime":1722270631,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:36:17.166666    6802 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:36:17.171751    6802 out.go:177] * [functional-863000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:36:17.178953    6802 notify.go:220] Checking for updates...
	I0729 10:36:17.183838    6802 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:36:17.190812    6802 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:36:17.194825    6802 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:36:17.202873    6802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:36:17.205891    6802 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:36:17.208931    6802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:36:17.213179    6802 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:36:17.213232    6802 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:36:17.217854    6802 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:36:17.222873    6802 start.go:297] selected driver: qemu2
	I0729 10:36:17.222880    6802 start.go:901] validating driver "qemu2" against &{Name:functional-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:36:17.222928    6802 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:36:17.225127    6802 cni.go:84] Creating CNI manager for ""
	I0729 10:36:17.225145    6802 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:36:17.225188    6802 start.go:340] cluster config:
	{Name:functional-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:36:17.228764    6802 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:36:17.236879    6802 out.go:177] * Starting "functional-863000" primary control-plane node in "functional-863000" cluster
	I0729 10:36:17.240864    6802 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:36:17.240886    6802 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:36:17.240895    6802 cache.go:56] Caching tarball of preloaded images
	I0729 10:36:17.240954    6802 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:36:17.240960    6802 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:36:17.241018    6802 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/functional-863000/config.json ...
	I0729 10:36:17.241446    6802 start.go:360] acquireMachinesLock for functional-863000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:36:17.241484    6802 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "functional-863000"
	I0729 10:36:17.241495    6802 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:36:17.241501    6802 fix.go:54] fixHost starting: 
	I0729 10:36:17.241630    6802 fix.go:112] recreateIfNeeded on functional-863000: state=Stopped err=<nil>
	W0729 10:36:17.241639    6802 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:36:17.249845    6802 out.go:177] * Restarting existing qemu2 VM for "functional-863000" ...
	I0729 10:36:17.253910    6802 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:36:17.253952    6802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:1b:b5:a1:ba:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/disk.qcow2
	I0729 10:36:17.256160    6802 main.go:141] libmachine: STDOUT: 
	I0729 10:36:17.256182    6802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:36:17.256215    6802 fix.go:56] duration metric: took 14.713792ms for fixHost
	I0729 10:36:17.256219    6802 start.go:83] releasing machines lock for "functional-863000", held for 14.73025ms
	W0729 10:36:17.256227    6802 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:36:17.256271    6802 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:36:17.256290    6802 start.go:729] Will try again in 5 seconds ...
	I0729 10:36:22.258576    6802 start.go:360] acquireMachinesLock for functional-863000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:36:22.259087    6802 start.go:364] duration metric: took 397.125µs to acquireMachinesLock for "functional-863000"
	I0729 10:36:22.259234    6802 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:36:22.259257    6802 fix.go:54] fixHost starting: 
	I0729 10:36:22.260021    6802 fix.go:112] recreateIfNeeded on functional-863000: state=Stopped err=<nil>
	W0729 10:36:22.260063    6802 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:36:22.267358    6802 out.go:177] * Restarting existing qemu2 VM for "functional-863000" ...
	I0729 10:36:22.271535    6802 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:36:22.271712    6802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:1b:b5:a1:ba:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/disk.qcow2
	I0729 10:36:22.281075    6802 main.go:141] libmachine: STDOUT: 
	I0729 10:36:22.281146    6802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:36:22.281245    6802 fix.go:56] duration metric: took 21.993208ms for fixHost
	I0729 10:36:22.281264    6802 start.go:83] releasing machines lock for "functional-863000", held for 22.150584ms
	W0729 10:36:22.281457    6802 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-863000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-863000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:36:22.289398    6802 out.go:177] 
	W0729 10:36:22.293431    6802 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:36:22.293448    6802 out.go:239] * 
	* 
	W0729 10:36:22.296063    6802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:36:22.304262    6802 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-863000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.200147084s for "functional-863000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (67.514875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
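
The harness distinguishes outcomes purely by exit status: 80 from the failed start (the GUEST_PROVISION exit above), 83 from commands against a not-running node, 7 from `status` on a stopped host. A small Go sketch of how a `(dbg) Run`-style helper can capture that status; the code-to-meaning mapping is minikube-specific and only echoed from the logs here, not defined by this sketch:

```go
// runcmd.go - a sketch of exit-status capture as a test harness might do it;
// separates "ran but exited non-zero" from "could not run at all".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func run(name string, args ...string) (string, int, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode(), nil // non-zero exit is a result, not a failure to run
	}
	if err != nil {
		return "", -1, err // binary missing, permissions, etc.
	}
	return string(out), 0, nil
}

func main() {
	out, code, err := run("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "functional-863000")
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("exit %d: %s", code, out) // exit 7 here corresponds to a stopped host
}
```

This mirrors why the post-mortem helpers print "status error: exit status 7 (may be ok)": a non-zero exit is expected data, not a harness error.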

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.981958ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-863000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (30.249708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
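
The KubeContext failure is a knock-on effect: the start never completed, so no context named functional-863000 was written to the kubeconfig and `kubectl config current-context` has nothing to report. A sketch that reads CurrentContext the same way, assuming the k8s.io/client-go module is available; an empty value reproduces the "current-context is not set" error above:

```go
// kubecontext.go - a sketch of the current-context lookup behind
// `kubectl config current-context`; assumes k8s.io/client-go is a dependency.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Honors $KUBECONFIG and the default ~/.kube/config search path, like kubectl.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "error loading kubeconfig:", err)
		os.Exit(1)
	}
	if cfg.CurrentContext == "" {
		fmt.Fprintln(os.Stderr, "error: current-context is not set")
		os.Exit(1)
	}
	fmt.Println(cfg.CurrentContext)
}
```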

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-863000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-863000 get po -A: exit status 1 (26.110625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-863000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-863000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-863000\n"*: args "kubectl --context functional-863000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-863000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (30.170875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh sudo crictl images: exit status 83 (40.886333ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-863000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
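
verify_cache_inside_node asserts that the cached pause:3.3 image (sha prefix 3d18732f8686c, quoted above) appears in the node's `crictl images` listing; in this run the ssh step already fails with exit 83. A sketch of the containment check itself, reusing the binary path, profile name, and sha from the log:

```go
// cachecheck.go - a sketch of the check behind functional_test.go:1126:
// run `ssh sudo crictl images` on the node and look for the expected sha.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const sha = "3d18732f8686c" // pause:3.3 sha prefix from the log above
	out, err := exec.Command("out/minikube-darwin-arm64",
		"-p", "functional-863000", "ssh", "sudo crictl images").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), sha) {
		fmt.Printf("expected sha %q in crictl output, not found\n", sha)
		return
	}
	fmt.Println("cached image is present inside the node")
}
```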

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (39.860834ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-863000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.900792ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (38.855209ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-863000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
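
cache_reload exercises a four-step protocol: delete the image on the node, confirm `crictl inspecti` now fails, run `minikube cache reload`, then confirm `inspecti` succeeds again. In this run every ssh step short-circuits with exit 83 because the host never started. A sketch of the sequence, with a hypothetical mk wrapper around the test binary and profile from the log:

```go
// cachereload.go - a sketch of the delete/verify/reload/verify sequence in
// TestFunctional/serial/CacheCmd/cache/cache_reload; mk is illustrative.
package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube test binary against the test profile and echoes output.
func mk(args ...string) error {
	all := append([]string{"-p", "functional-863000"}, args...)
	out, err := exec.Command("out/minikube-darwin-arm64", all...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", all, out)
	return err
}

func main() {
	_ = mk("ssh", "sudo docker rmi registry.k8s.io/pause:latest") // 1. delete on the node
	if mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("image unexpectedly still present") // 2. expect inspecti to fail
	}
	_ = mk("cache", "reload") // 3. push cached images back to the node
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err) // 4. expect success
		return
	}
	fmt.Println("cache reload restored the image")
}
```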

TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 kubectl -- --context functional-863000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 kubectl -- --context functional-863000 get pods: exit status 1 (704.327083ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-863000
	* no server found for cluster "functional-863000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-863000 kubectl -- --context functional-863000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (31.716834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.74s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-863000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-863000 get pods: exit status 1 (939.652834ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-863000
	* no server found for cluster "functional-863000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-863000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (29.375292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)
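
Both kubectl variants (the "minikube kubectl --" wrapper above and the direct out/kubectl invocation) fail the same way: because the earlier start never provisioned the cluster, no functional-863000 context was ever written to the kubeconfig, so configuration lookup fails before any API call is made. A quick way to verify this from the agent, shown here as a sketch (the get-contexts call is not part of the test run):

    # List contexts in the kubeconfig the suite points at; a healthy run
    # would show a functional-863000 entry here.
    KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig \
        kubectl config get-contexts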

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-863000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-863000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.194537541s)

-- stdout --
	* [functional-863000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-863000" primary control-plane node in "functional-863000" cluster
	* Restarting existing qemu2 VM for "functional-863000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-863000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-863000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-863000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.19516025s for "functional-863000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (66.287833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
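
The stderr above shows the root cause for this failure (and for the stopped-host state behind the earlier ones): the qemu2 driver launches the VM through socket_vmnet_client, as the libmachine "executing:" lines in the logs output below show, and the daemon socket at /var/run/socket_vmnet refuses connections, so both restart attempts die before provisioning. A sketch for triaging this on the build agent; the exact restart command depends on how socket_vmnet was installed (the /opt/socket_vmnet paths in the config dump suggest a source install with a launchd service, but that is an assumption):

    # Does the socket the driver dials actually exist?
    ls -l /var/run/socket_vmnet
    # Is a socket_vmnet daemon running at all?
    pgrep -fl socket_vmnet
    # Restart depends on the install method, e.g. for a Homebrew install:
    # sudo brew services restart socket_vmnet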

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-863000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-863000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.273709ms)

** stderr ** 
	error: context "functional-863000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-863000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (29.956292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 logs: exit status 83 (75.676708ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-403000 | jenkins | v1.33.1 | 29 Jul 24 10:34 PDT |                     |
	|         | -p download-only-403000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| delete  | -p download-only-403000                                                  | download-only-403000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| start   | -o=json --download-only                                                  | download-only-310000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | -p download-only-310000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| delete  | -p download-only-310000                                                  | download-only-310000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| start   | -o=json --download-only                                                  | download-only-732000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | -p download-only-732000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| delete  | -p download-only-732000                                                  | download-only-732000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| delete  | -p download-only-403000                                                  | download-only-403000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| delete  | -p download-only-310000                                                  | download-only-310000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| delete  | -p download-only-732000                                                  | download-only-732000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| start   | --download-only -p                                                       | binary-mirror-184000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | binary-mirror-184000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51031                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-184000                                                  | binary-mirror-184000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| addons  | enable dashboard -p                                                      | addons-166000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | addons-166000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-166000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | addons-166000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-166000 --wait=true                                             | addons-166000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-166000                                                         | addons-166000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| start   | -p nospam-634000 -n=1 --memory=2250 --wait=false                         | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:36 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-634000                                                         | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	| start   | -p functional-863000                                                     | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-863000                                                     | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-863000 cache add                                              | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-863000 cache add                                              | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-863000 cache add                                              | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-863000 cache add                                              | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	|         | minikube-local-cache-test:functional-863000                              |                      |         |         |                     |                     |
	| cache   | functional-863000 cache delete                                           | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	|         | minikube-local-cache-test:functional-863000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	| ssh     | functional-863000 ssh sudo                                               | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-863000                                                        | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-863000 ssh                                                    | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-863000 cache reload                                           | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	| ssh     | functional-863000 ssh                                                    | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-863000 kubectl --                                             | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
	|         | --context functional-863000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-863000                                                     | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:36:27
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:36:27.415612    6881 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:36:27.415734    6881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:27.415736    6881 out.go:304] Setting ErrFile to fd 2...
	I0729 10:36:27.415738    6881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:27.415882    6881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:36:27.416895    6881 out.go:298] Setting JSON to false
	I0729 10:36:27.432902    6881 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3956,"bootTime":1722270631,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:36:27.433002    6881 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:36:27.437518    6881 out.go:177] * [functional-863000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:36:27.446333    6881 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:36:27.446381    6881 notify.go:220] Checking for updates...
	I0729 10:36:27.454231    6881 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:36:27.457360    6881 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:36:27.460393    6881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:36:27.463400    6881 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:36:27.466411    6881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:36:27.469716    6881 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:36:27.469761    6881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:36:27.474326    6881 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:36:27.481385    6881 start.go:297] selected driver: qemu2
	I0729 10:36:27.481391    6881 start.go:901] validating driver "qemu2" against &{Name:functional-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:36:27.481451    6881 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:36:27.483706    6881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:36:27.483796    6881 cni.go:84] Creating CNI manager for ""
	I0729 10:36:27.483803    6881 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:36:27.483855    6881 start.go:340] cluster config:
	{Name:functional-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:36:27.487286    6881 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:36:27.498372    6881 out.go:177] * Starting "functional-863000" primary control-plane node in "functional-863000" cluster
	I0729 10:36:27.504321    6881 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:36:27.504336    6881 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:36:27.504346    6881 cache.go:56] Caching tarball of preloaded images
	I0729 10:36:27.504409    6881 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:36:27.504417    6881 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:36:27.504470    6881 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/functional-863000/config.json ...
	I0729 10:36:27.504758    6881 start.go:360] acquireMachinesLock for functional-863000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:36:27.504790    6881 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "functional-863000"
	I0729 10:36:27.504798    6881 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:36:27.504803    6881 fix.go:54] fixHost starting: 
	I0729 10:36:27.504916    6881 fix.go:112] recreateIfNeeded on functional-863000: state=Stopped err=<nil>
	W0729 10:36:27.504922    6881 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:36:27.515386    6881 out.go:177] * Restarting existing qemu2 VM for "functional-863000" ...
	I0729 10:36:27.522468    6881 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:36:27.522504    6881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:1b:b5:a1:ba:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/disk.qcow2
	I0729 10:36:27.524449    6881 main.go:141] libmachine: STDOUT: 
	I0729 10:36:27.524464    6881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:36:27.524491    6881 fix.go:56] duration metric: took 19.68825ms for fixHost
	I0729 10:36:27.524495    6881 start.go:83] releasing machines lock for "functional-863000", held for 19.702292ms
	W0729 10:36:27.524498    6881 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:36:27.524523    6881 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:36:27.524527    6881 start.go:729] Will try again in 5 seconds ...
	I0729 10:36:32.526167    6881 start.go:360] acquireMachinesLock for functional-863000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:36:32.526560    6881 start.go:364] duration metric: took 316.666µs to acquireMachinesLock for "functional-863000"
	I0729 10:36:32.526686    6881 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:36:32.526695    6881 fix.go:54] fixHost starting: 
	I0729 10:36:32.527418    6881 fix.go:112] recreateIfNeeded on functional-863000: state=Stopped err=<nil>
	W0729 10:36:32.527436    6881 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:36:32.534981    6881 out.go:177] * Restarting existing qemu2 VM for "functional-863000" ...
	I0729 10:36:32.537875    6881 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:36:32.538136    6881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:1b:b5:a1:ba:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/disk.qcow2
	I0729 10:36:32.547312    6881 main.go:141] libmachine: STDOUT: 
	I0729 10:36:32.547355    6881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:36:32.547431    6881 fix.go:56] duration metric: took 20.736708ms for fixHost
	I0729 10:36:32.547445    6881 start.go:83] releasing machines lock for "functional-863000", held for 20.871292ms
	W0729 10:36:32.547595    6881 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-863000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:36:32.555814    6881 out.go:177] 
	W0729 10:36:32.559881    6881 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:36:32.559910    6881 out.go:239] * 
	W0729 10:36:32.562492    6881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:36:32.569807    6881 out.go:177] 
	
	
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-863000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-403000 | jenkins | v1.33.1 | 29 Jul 24 10:34 PDT |                     |
|         | -p download-only-403000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
| delete  | -p download-only-403000                                                  | download-only-403000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
| start   | -o=json --download-only                                                  | download-only-310000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | -p download-only-310000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
| delete  | -p download-only-310000                                                  | download-only-310000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
| start   | -o=json --download-only                                                  | download-only-732000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | -p download-only-732000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
| delete  | -p download-only-732000                                                  | download-only-732000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
| delete  | -p download-only-403000                                                  | download-only-403000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
| delete  | -p download-only-310000                                                  | download-only-310000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
| delete  | -p download-only-732000                                                  | download-only-732000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
| start   | --download-only -p                                                       | binary-mirror-184000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | binary-mirror-184000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51031                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-184000                                                  | binary-mirror-184000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
| addons  | enable dashboard -p                                                      | addons-166000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | addons-166000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-166000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | addons-166000                                                            |                      |         |         |                     |                     |
| start   | -p addons-166000 --wait=true                                             | addons-166000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-166000                                                         | addons-166000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
| start   | -p nospam-634000 -n=1 --memory=2250 --wait=false                         | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:36 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-634000 --log_dir                                                  | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-634000                                                         | nospam-634000        | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
| start   | -p functional-863000                                                     | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-863000                                                     | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-863000 cache add                                              | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-863000 cache add                                              | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-863000 cache add                                              | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-863000 cache add                                              | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
|         | minikube-local-cache-test:functional-863000                              |                      |         |         |                     |                     |
| cache   | functional-863000 cache delete                                           | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
|         | minikube-local-cache-test:functional-863000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
| ssh     | functional-863000 ssh sudo                                               | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-863000                                                        | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-863000 ssh                                                    | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-863000 cache reload                                           | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
| ssh     | functional-863000 ssh                                                    | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT | 29 Jul 24 10:36 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-863000 kubectl --                                             | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
|         | --context functional-863000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-863000                                                     | functional-863000    | jenkins | v1.33.1 | 29 Jul 24 10:36 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/29 10:36:27
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0729 10:36:27.415612    6881 out.go:291] Setting OutFile to fd 1 ...
I0729 10:36:27.415734    6881 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:27.415736    6881 out.go:304] Setting ErrFile to fd 2...
I0729 10:36:27.415738    6881 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:27.415882    6881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
I0729 10:36:27.416895    6881 out.go:298] Setting JSON to false
I0729 10:36:27.432902    6881 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3956,"bootTime":1722270631,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0729 10:36:27.433002    6881 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0729 10:36:27.437518    6881 out.go:177] * [functional-863000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0729 10:36:27.446333    6881 out.go:177]   - MINIKUBE_LOCATION=19339
I0729 10:36:27.446381    6881 notify.go:220] Checking for updates...
I0729 10:36:27.454231    6881 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
I0729 10:36:27.457360    6881 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0729 10:36:27.460393    6881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0729 10:36:27.463400    6881 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
I0729 10:36:27.466411    6881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0729 10:36:27.469716    6881 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:36:27.469761    6881 driver.go:392] Setting default libvirt URI to qemu:///system
I0729 10:36:27.474326    6881 out.go:177] * Using the qemu2 driver based on existing profile
I0729 10:36:27.481385    6881 start.go:297] selected driver: qemu2
I0729 10:36:27.481391    6881 start.go:901] validating driver "qemu2" against &{Name:functional-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 10:36:27.481451    6881 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0729 10:36:27.483706    6881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0729 10:36:27.483796    6881 cni.go:84] Creating CNI manager for ""
I0729 10:36:27.483803    6881 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0729 10:36:27.483855    6881 start.go:340] cluster config:
{Name:functional-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 10:36:27.487286    6881 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0729 10:36:27.498372    6881 out.go:177] * Starting "functional-863000" primary control-plane node in "functional-863000" cluster
I0729 10:36:27.504321    6881 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 10:36:27.504336    6881 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 10:36:27.504346    6881 cache.go:56] Caching tarball of preloaded images
I0729 10:36:27.504409    6881 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 10:36:27.504417    6881 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 10:36:27.504470    6881 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/functional-863000/config.json ...
I0729 10:36:27.504758    6881 start.go:360] acquireMachinesLock for functional-863000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 10:36:27.504790    6881 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "functional-863000"
I0729 10:36:27.504798    6881 start.go:96] Skipping create...Using existing machine configuration
I0729 10:36:27.504803    6881 fix.go:54] fixHost starting: 
I0729 10:36:27.504916    6881 fix.go:112] recreateIfNeeded on functional-863000: state=Stopped err=<nil>
W0729 10:36:27.504922    6881 fix.go:138] unexpected machine state, will restart: <nil>
I0729 10:36:27.515386    6881 out.go:177] * Restarting existing qemu2 VM for "functional-863000" ...
I0729 10:36:27.522468    6881 qemu.go:418] Using hvf for hardware acceleration
I0729 10:36:27.522504    6881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:1b:b5:a1:ba:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/disk.qcow2
I0729 10:36:27.524449    6881 main.go:141] libmachine: STDOUT: 
I0729 10:36:27.524464    6881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0729 10:36:27.524491    6881 fix.go:56] duration metric: took 19.68825ms for fixHost
I0729 10:36:27.524495    6881 start.go:83] releasing machines lock for "functional-863000", held for 19.702292ms
W0729 10:36:27.524498    6881 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 10:36:27.524523    6881 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 10:36:27.524527    6881 start.go:729] Will try again in 5 seconds ...
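
The restart attempt above fails before the VM ever boots: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so qemu is never handed a vmnet file descriptor, and the same "Connection refused" recurs on the retry below. A rough host-side check, assuming socket_vmnet was installed through Homebrew (the usual setup for the qemu2 driver with Network:socket_vmnet; the exact service name may differ on other installs):

  # Confirm the daemon's unix socket exists on the host
  ls -l /var/run/socket_vmnet

  # socket_vmnet needs root to create vmnet interfaces; the Homebrew
  # service is the documented way to keep it running
  HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet
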
I0729 10:36:32.526167    6881 start.go:360] acquireMachinesLock for functional-863000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 10:36:32.526560    6881 start.go:364] duration metric: took 316.666µs to acquireMachinesLock for "functional-863000"
I0729 10:36:32.526686    6881 start.go:96] Skipping create...Using existing machine configuration
I0729 10:36:32.526695    6881 fix.go:54] fixHost starting: 
I0729 10:36:32.527418    6881 fix.go:112] recreateIfNeeded on functional-863000: state=Stopped err=<nil>
W0729 10:36:32.527436    6881 fix.go:138] unexpected machine state, will restart: <nil>
I0729 10:36:32.534981    6881 out.go:177] * Restarting existing qemu2 VM for "functional-863000" ...
I0729 10:36:32.537875    6881 qemu.go:418] Using hvf for hardware acceleration
I0729 10:36:32.538136    6881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:1b:b5:a1:ba:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/disk.qcow2
I0729 10:36:32.547312    6881 main.go:141] libmachine: STDOUT: 
I0729 10:36:32.547355    6881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0729 10:36:32.547431    6881 fix.go:56] duration metric: took 20.736708ms for fixHost
I0729 10:36:32.547445    6881 start.go:83] releasing machines lock for "functional-863000", held for 20.871292ms
W0729 10:36:32.547595    6881 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-863000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 10:36:32.555814    6881 out.go:177] 
W0729 10:36:32.559881    6881 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 10:36:32.559910    6881 out.go:239] * 
W0729 10:36:32.562492    6881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 10:36:32.569807    6881 out.go:177] 
* The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
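
This failure, like TestFunctional/serial/LogsFileCmd below, is a downstream symptom: with the host stuck in state=Stopped, "minikube logs" has no guest to read from, so the expected "Linux" marker never appears in the output. The recovery the log itself suggests is to recreate the profile once the host socket is reachable again; a minimal sketch, reusing the flags from this run's original start (see the Audit table above):

  out/minikube-darwin-arm64 delete -p functional-863000
  out/minikube-darwin-arm64 start -p functional-863000 --memory=4000 \
    --apiserver-port=8441 --wait=all --driver=qemu2
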
TestFunctional/serial/LogsFileCmd (0.07s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2319351360/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
(identical to the Audit log shown above)
==> Last Start <==
Log file created at: 2024/07/29 10:36:27
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0729 10:36:27.415612    6881 out.go:291] Setting OutFile to fd 1 ...
I0729 10:36:27.415734    6881 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:27.415736    6881 out.go:304] Setting ErrFile to fd 2...
I0729 10:36:27.415738    6881 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:27.415882    6881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
I0729 10:36:27.416895    6881 out.go:298] Setting JSON to false
I0729 10:36:27.432902    6881 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3956,"bootTime":1722270631,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0729 10:36:27.433002    6881 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0729 10:36:27.437518    6881 out.go:177] * [functional-863000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0729 10:36:27.446333    6881 out.go:177]   - MINIKUBE_LOCATION=19339
I0729 10:36:27.446381    6881 notify.go:220] Checking for updates...
I0729 10:36:27.454231    6881 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
I0729 10:36:27.457360    6881 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0729 10:36:27.460393    6881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0729 10:36:27.463400    6881 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
I0729 10:36:27.466411    6881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0729 10:36:27.469716    6881 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:36:27.469761    6881 driver.go:392] Setting default libvirt URI to qemu:///system
I0729 10:36:27.474326    6881 out.go:177] * Using the qemu2 driver based on existing profile
I0729 10:36:27.481385    6881 start.go:297] selected driver: qemu2
I0729 10:36:27.481391    6881 start.go:901] validating driver "qemu2" against &{Name:functional-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 10:36:27.481451    6881 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0729 10:36:27.483706    6881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0729 10:36:27.483796    6881 cni.go:84] Creating CNI manager for ""
I0729 10:36:27.483803    6881 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0729 10:36:27.483855    6881 start.go:340] cluster config:
{Name:functional-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 10:36:27.487286    6881 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0729 10:36:27.498372    6881 out.go:177] * Starting "functional-863000" primary control-plane node in "functional-863000" cluster
I0729 10:36:27.504321    6881 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 10:36:27.504336    6881 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 10:36:27.504346    6881 cache.go:56] Caching tarball of preloaded images
I0729 10:36:27.504409    6881 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 10:36:27.504417    6881 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 10:36:27.504470    6881 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/functional-863000/config.json ...
I0729 10:36:27.504758    6881 start.go:360] acquireMachinesLock for functional-863000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 10:36:27.504790    6881 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "functional-863000"
I0729 10:36:27.504798    6881 start.go:96] Skipping create...Using existing machine configuration
I0729 10:36:27.504803    6881 fix.go:54] fixHost starting: 
I0729 10:36:27.504916    6881 fix.go:112] recreateIfNeeded on functional-863000: state=Stopped err=<nil>
W0729 10:36:27.504922    6881 fix.go:138] unexpected machine state, will restart: <nil>
I0729 10:36:27.515386    6881 out.go:177] * Restarting existing qemu2 VM for "functional-863000" ...
I0729 10:36:27.522468    6881 qemu.go:418] Using hvf for hardware acceleration
I0729 10:36:27.522504    6881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:1b:b5:a1:ba:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/disk.qcow2
I0729 10:36:27.524449    6881 main.go:141] libmachine: STDOUT: 
I0729 10:36:27.524464    6881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 10:36:27.524491    6881 fix.go:56] duration metric: took 19.68825ms for fixHost
I0729 10:36:27.524495    6881 start.go:83] releasing machines lock for "functional-863000", held for 19.702292ms
W0729 10:36:27.524498    6881 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 10:36:27.524523    6881 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 10:36:27.524527    6881 start.go:729] Will try again in 5 seconds ...
I0729 10:36:32.526167    6881 start.go:360] acquireMachinesLock for functional-863000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 10:36:32.526560    6881 start.go:364] duration metric: took 316.666µs to acquireMachinesLock for "functional-863000"
I0729 10:36:32.526686    6881 start.go:96] Skipping create...Using existing machine configuration
I0729 10:36:32.526695    6881 fix.go:54] fixHost starting: 
I0729 10:36:32.527418    6881 fix.go:112] recreateIfNeeded on functional-863000: state=Stopped err=<nil>
W0729 10:36:32.527436    6881 fix.go:138] unexpected machine state, will restart: <nil>
I0729 10:36:32.534981    6881 out.go:177] * Restarting existing qemu2 VM for "functional-863000" ...
I0729 10:36:32.537875    6881 qemu.go:418] Using hvf for hardware acceleration
I0729 10:36:32.538136    6881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:1b:b5:a1:ba:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/functional-863000/disk.qcow2
I0729 10:36:32.547312    6881 main.go:141] libmachine: STDOUT: 
I0729 10:36:32.547355    6881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 10:36:32.547431    6881 fix.go:56] duration metric: took 20.736708ms for fixHost
I0729 10:36:32.547445    6881 start.go:83] releasing machines lock for "functional-863000", held for 20.871292ms
W0729 10:36:32.547595    6881 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-863000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 10:36:32.555814    6881 out.go:177] 
W0729 10:36:32.559881    6881 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 10:36:32.559910    6881 out.go:239] * 
W0729 10:36:32.562492    6881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 10:36:32.569807    6881 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
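
Note: both restart attempts above die at the same point: the qemu2 driver cannot reach the socket_vmnet control socket at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and every later test inherits a stopped host. A minimal Go sketch (a hypothetical diagnostic, not part of the test suite) that reproduces the same reachability check:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Path taken verbatim from the "Connection refused" errors above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the dial fails, the socket_vmnet daemon is most likely not running on the agent; for Homebrew installs it is usually managed as a service (e.g. "sudo brew services restart socket_vmnet").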

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-863000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-863000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.792542ms)

** stderr ** 
	error: context "functional-863000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-863000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
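
Note: the error "context "functional-863000" does not exist" recurs through every kubectl-based test below, because the failed start never wrote a context for the profile into the kubeconfig. A small client-go sketch (illustrative only; the flow is an assumption, not the suite's code) that lists the contexts kubectl would actually see:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Use the same default loading rules kubectl applies:
        // the KUBECONFIG env var first, then ~/.kube/config.
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load()
        if err != nil {
            panic(err)
        }
        for name := range cfg.Contexts {
            fmt.Println(name) // "functional-863000" will be absent here
        }
    }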

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-863000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-863000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-863000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-863000 --alsologtostderr -v=1] stderr:
I0729 10:37:11.333542    7076 out.go:291] Setting OutFile to fd 1 ...
I0729 10:37:11.333968    7076 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:11.333972    7076 out.go:304] Setting ErrFile to fd 2...
I0729 10:37:11.333975    7076 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:11.334128    7076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
I0729 10:37:11.334338    7076 mustload.go:65] Loading cluster: functional-863000
I0729 10:37:11.334535    7076 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:37:11.339016    7076 out.go:177] * The control-plane node functional-863000 host is not running: state=Stopped
I0729 10:37:11.342901    7076 out.go:177]   To start a cluster, run: "minikube start -p functional-863000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (41.599166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
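
Note: the assertion at functional_test.go:914 only requires that the dashboard command print a URL; with the host stopped, minikube prints start advice instead. A rough sketch of that kind of check (simplified; the real test's matching logic may differ):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // stdout captured from the dashboard command in the failure above.
        out := "* The control-plane node functional-863000 host is not running: state=Stopped\n"
        // Treat any http(s) URL anywhere in the output as success.
        urlRe := regexp.MustCompile(`https?://\S+`)
        if u := urlRe.FindString(out); u != "" {
            fmt.Println("dashboard URL:", u)
        } else {
            fmt.Println("output didn't produce a URL") // the failure seen above
        }
    }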

TestFunctional/parallel/StatusCmd (0.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 status: exit status 7 (75.250583ms)

-- stdout --
	functional-863000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-863000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (34.164583ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-863000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 status -o json: exit status 7 (30.258292ms)

-- stdout --
	{"Name":"functional-863000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-863000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (29.311792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.17s)
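
Note: the -o json form of the status output is the easiest one to consume programmatically. A small sketch that decodes the exact line shown above (struct fields inferred from that output, not from minikube's source):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status mirrors the fields visible in the "status -o json" output above.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        raw := `{"Name":"functional-863000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
        var s Status
        if err := json.Unmarshal([]byte(raw), &s); err != nil {
            panic(err)
        }
        fmt.Printf("%s: host=%s apiserver=%s\n", s.Name, s.Host, s.APIServer)
    }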

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-863000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-863000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.462333ms)

** stderr ** 
	error: context "functional-863000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-863000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-863000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-863000 describe po hello-node-connect: exit status 1 (26.339167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-863000

** /stderr **
functional_test.go:1600: "kubectl --context functional-863000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-863000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-863000 logs -l app=hello-node-connect: exit status 1 (26.69875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-863000

** /stderr **
functional_test.go:1606: "kubectl --context functional-863000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-863000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-863000 describe svc hello-node-connect: exit status 1 (27.891375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-863000

** /stderr **
functional_test.go:1612: "kubectl --context functional-863000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (29.758417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-863000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (33.363667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "echo hello": exit status 83 (57.852375ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-863000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-863000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-863000\"\n"*. args "out/minikube-darwin-arm64 -p functional-863000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "cat /etc/hostname": exit status 83 (50.769541ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-863000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-863000"- but got *"* The control-plane node functional-863000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-863000\"\n"*. args "out/minikube-darwin-arm64 -p functional-863000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (29.158ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (54.41575ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-863000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh -n functional-863000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh -n functional-863000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.971542ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-863000 ssh -n functional-863000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-863000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-863000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 cp functional-863000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3476312870/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 cp functional-863000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3476312870/001/cp-test.txt: exit status 83 (40.522208ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-863000 cp functional-863000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3476312870/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh -n functional-863000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh -n functional-863000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.924917ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-863000 ssh -n functional-863000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3476312870/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-863000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-863000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (45.005917ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-863000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh -n functional-863000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh -n functional-863000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (52.834041ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-863000 ssh -n functional-863000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-863000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-863000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
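
Note: the "(-want +got)" blocks above are github.com/google/go-cmp output: the helper compares the file content it expected cp to round-trip against the stopped-host advice that came back instead. A minimal reproduction of that diff (assuming the go-cmp module is available):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := "Test file for checking file cp process"
        got := "* The control-plane node functional-863000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-863000\"\n"
        // cmp.Diff returns "" for equal values and a -want/+got diff otherwise.
        fmt.Println(cmp.Diff(want, got))
    }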

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6543/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /etc/test/nested/copy/6543/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /etc/test/nested/copy/6543/hosts": exit status 83 (45.604208ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /etc/test/nested/copy/6543/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-863000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-863000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (29.118458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6543.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /etc/ssl/certs/6543.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /etc/ssl/certs/6543.pem": exit status 83 (42.675125ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/6543.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-863000 ssh \"sudo cat /etc/ssl/certs/6543.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6543.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-863000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-863000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6543.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /usr/share/ca-certificates/6543.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /usr/share/ca-certificates/6543.pem": exit status 83 (40.655541ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/6543.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-863000 ssh \"sudo cat /usr/share/ca-certificates/6543.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6543.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-863000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-863000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (39.818625ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-863000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-863000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-863000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/65432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /etc/ssl/certs/65432.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /etc/ssl/certs/65432.pem": exit status 83 (39.554583ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/65432.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-863000 ssh \"sudo cat /etc/ssl/certs/65432.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/65432.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-863000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-863000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/65432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /usr/share/ca-certificates/65432.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /usr/share/ca-certificates/65432.pem": exit status 83 (39.629666ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/65432.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-863000 ssh \"sudo cat /usr/share/ca-certificates/65432.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/65432.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-863000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-863000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (42.817792ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-863000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-863000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-863000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (28.898917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)
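
Editor's note: every assertion in the CertSync block above fails the same way: `minikube ssh` exits 83 because the guest is stopped, so the test ends up diffing the expected PEM against the "host is not running" advice rather than against a file. The sketch below is a minimal stand-alone version of that host/guest comparison, assuming a running profile; the file name certsync_sketch.go is hypothetical and this is not the suite's actual source.

// certsync_sketch.go (hypothetical): re-run the CertSync comparison by
// hand against a *running* profile. In this log the ssh step exits 83
// before any bytes are compared.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The certificate the test installs on the host side.
	want, err := os.ReadFile("minikube_test2.pem")
	if err != nil {
		panic(err)
	}
	// The copy minikube should have synced into the guest.
	got, err := exec.Command("minikube", "-p", "functional-863000",
		"ssh", "sudo cat /usr/share/ca-certificates/65432.pem").Output()
	if err != nil {
		panic(err) // exit status 83 here means the profile is not running
	}
	if !bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(got)) {
		fmt.Println("cert mismatch between host and guest")
	}
}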

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-863000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-863000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.45725ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-863000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-863000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-863000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-863000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-863000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-863000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-863000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-863000 -n functional-863000: exit status 7 (29.045667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-863000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "sudo systemctl is-active crio": exit status 83 (40.32725ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-863000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-863000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-863000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-863000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0729 10:36:33.215050    6929 out.go:291] Setting OutFile to fd 1 ...
I0729 10:36:33.216200    6929 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:33.216205    6929 out.go:304] Setting ErrFile to fd 2...
I0729 10:36:33.216207    6929 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:33.216342    6929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
I0729 10:36:33.220065    6929 mustload.go:65] Loading cluster: functional-863000
I0729 10:36:33.220265    6929 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:36:33.225954    6929 out.go:177] * The control-plane node functional-863000 host is not running: state=Stopped
I0729 10:36:33.232939    6929 out.go:177]   To start a cluster, run: "minikube start -p functional-863000"

stdout: * The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-863000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-863000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-863000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-863000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 6928: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-863000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-863000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-863000": client config: context "functional-863000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (100.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-863000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-863000 get svc nginx-svc: exit status 1 (68.652708ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-863000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-863000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (100.44s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-863000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-863000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.337041ms)

** stderr ** 
	error: context "functional-863000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-863000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 service list: exit status 83 (40.8685ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-863000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-863000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-863000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 service list -o json: exit status 83 (41.676375ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-863000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 service --namespace=default --https --url hello-node: exit status 83 (40.91275ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-863000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 service hello-node --url --format={{.IP}}: exit status 83 (40.979708ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-863000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-863000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-863000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 service hello-node --url: exit status 83 (41.500208ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-863000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test.go:1565: failed to parse "* The control-plane node functional-863000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-863000\"": parse "* The control-plane node functional-863000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-863000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
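
Editor's note: the parse failure at functional_test.go:1565 is reproducible outside the suite: minikube's stopped-host advice spans two lines, and Go's net/url rejects any URL containing a control character such as the embedded newline. A stand-alone sketch (hypothetical file name, not part of the suite):

// urlparse_sketch.go (hypothetical): demonstrate why the stopped-host
// message cannot be parsed as a service URL.
package main

import (
	"fmt"
	"net/url"
)

func main() {
	msg := "* The control-plane node functional-863000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-863000\""
	_, err := url.Parse(msg)
	fmt.Println(err) // net/url: invalid control character in URL
}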

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 version -o=json --components: exit status 83 (40.835083ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-863000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-863000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-863000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-863000 image ls --format short --alsologtostderr:
I0729 10:37:16.324753    7199 out.go:291] Setting OutFile to fd 1 ...
I0729 10:37:16.324880    7199 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:16.324883    7199 out.go:304] Setting ErrFile to fd 2...
I0729 10:37:16.324886    7199 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:16.325003    7199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
I0729 10:37:16.325436    7199 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:37:16.325499    7199 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-863000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-863000 image ls --format table --alsologtostderr:
I0729 10:37:16.538825    7214 out.go:291] Setting OutFile to fd 1 ...
I0729 10:37:16.538947    7214 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:16.538951    7214 out.go:304] Setting ErrFile to fd 2...
I0729 10:37:16.538954    7214 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:16.539073    7214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
I0729 10:37:16.539471    7214 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:37:16.539532    7214 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-863000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-863000 image ls --format json --alsologtostderr:
I0729 10:37:16.504852    7212 out.go:291] Setting OutFile to fd 1 ...
I0729 10:37:16.504973    7212 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:16.504980    7212 out.go:304] Setting ErrFile to fd 2...
I0729 10:37:16.504982    7212 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:16.505105    7212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
I0729 10:37:16.505493    7212 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:37:16.505565    7212 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-863000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-863000 image ls --format yaml --alsologtostderr:
I0729 10:37:16.359809    7201 out.go:291] Setting OutFile to fd 1 ...
I0729 10:37:16.359952    7201 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:16.359955    7201 out.go:304] Setting ErrFile to fd 2...
I0729 10:37:16.359958    7201 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:16.360074    7201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
I0729 10:37:16.360473    7201 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:37:16.360540    7201 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh pgrep buildkitd: exit status 83 (42.040375ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image build -t localhost/my-image:functional-863000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-863000 image build -t localhost/my-image:functional-863000 testdata/build --alsologtostderr:
I0729 10:37:16.435838    7205 out.go:291] Setting OutFile to fd 1 ...
I0729 10:37:16.436229    7205 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:16.436233    7205 out.go:304] Setting ErrFile to fd 2...
I0729 10:37:16.436235    7205 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:16.436354    7205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
I0729 10:37:16.436713    7205 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:37:16.437163    7205 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:37:16.437402    7205 build_images.go:133] succeeded building to: 
I0729 10:37:16.437406    7205 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image ls
functional_test.go:442: expected "localhost/my-image:functional-863000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image load --daemon docker.io/kicbase/echo-server:functional-863000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-863000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image load --daemon docker.io/kicbase/echo-server:functional-863000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-863000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-863000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image load --daemon docker.io/kicbase/echo-server:functional-863000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-863000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image save docker.io/kicbase/echo-server:functional-863000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-863000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-863000 docker-env) && out/minikube-darwin-arm64 status -p functional-863000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-863000 docker-env) && out/minikube-darwin-arm64 status -p functional-863000": exit status 1 (42.786875ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 update-context --alsologtostderr -v=2: exit status 83 (38.887791ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
** stderr ** 
	I0729 10:37:16.572908    7216 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:37:16.573225    7216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:16.573229    7216 out.go:304] Setting ErrFile to fd 2...
	I0729 10:37:16.573231    7216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:16.573370    7216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:37:16.573556    7216 mustload.go:65] Loading cluster: functional-863000
	I0729 10:37:16.573749    7216 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:37:16.577853    7216 out.go:177] * The control-plane node functional-863000 host is not running: state=Stopped
	I0729 10:37:16.580833    7216 out.go:177]   To start a cluster, run: "minikube start -p functional-863000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-863000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-863000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-863000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 update-context --alsologtostderr -v=2: exit status 83 (41.612459ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
** stderr ** 
	I0729 10:37:16.657621    7220 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:37:16.657752    7220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:16.657755    7220 out.go:304] Setting ErrFile to fd 2...
	I0729 10:37:16.657758    7220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:16.657884    7220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:37:16.658075    7220 mustload.go:65] Loading cluster: functional-863000
	I0729 10:37:16.658252    7220 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:37:16.662864    7220 out.go:177] * The control-plane node functional-863000 host is not running: state=Stopped
	I0729 10:37:16.666871    7220 out.go:177]   To start a cluster, run: "minikube start -p functional-863000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-863000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-863000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-863000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 update-context --alsologtostderr -v=2: exit status 83 (44.618834ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
** stderr ** 
	I0729 10:37:16.611649    7218 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:37:16.611774    7218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:16.611778    7218 out.go:304] Setting ErrFile to fd 2...
	I0729 10:37:16.611780    7218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:16.611894    7218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:37:16.612085    7218 mustload.go:65] Loading cluster: functional-863000
	I0729 10:37:16.612271    7218 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:37:16.616841    7218 out.go:177] * The control-plane node functional-863000 host is not running: state=Stopped
	I0729 10:37:16.624867    7218 out.go:177]   To start a cluster, run: "minikube start -p functional-863000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-863000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-863000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-863000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.025416292s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
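
Editor's note: resolver #8 in the scutil dump above routes cluster.local queries to 10.96.0.10, an address that is only reachable while "minikube tunnel" is up; with the tunnel dead the query times out exactly as dig reports. The same lookup can be reproduced in Go by pinning the resolver to the cluster DNS server (sketch with a hypothetical file name, not part of the suite):

// dnslookup_sketch.go (hypothetical): repeat the dig query with Go's
// resolver forced to ask the in-cluster DNS server directly.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Ignore the system resolver address; always dial the cluster DNS.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err) // times out here, as dig did above
		return
	}
	fmt.Println("resolved:", addrs)
}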

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.45s)

TestMultiControlPlane/serial/StartCluster (9.88s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-320000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-320000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.805756584s)

-- stdout --
	* [ha-320000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-320000" primary control-plane node in "ha-320000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-320000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:39:16.642883    7253 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:39:16.643007    7253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:39:16.643010    7253 out.go:304] Setting ErrFile to fd 2...
	I0729 10:39:16.643013    7253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:39:16.643155    7253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:39:16.644209    7253 out.go:298] Setting JSON to false
	I0729 10:39:16.660388    7253 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4125,"bootTime":1722270631,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:39:16.660451    7253 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:39:16.665439    7253 out.go:177] * [ha-320000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:39:16.672400    7253 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:39:16.672462    7253 notify.go:220] Checking for updates...
	I0729 10:39:16.680405    7253 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:39:16.683460    7253 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:39:16.687357    7253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:39:16.690448    7253 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:39:16.693426    7253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:39:16.696473    7253 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:39:16.700365    7253 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:39:16.707384    7253 start.go:297] selected driver: qemu2
	I0729 10:39:16.707391    7253 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:39:16.707399    7253 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:39:16.709894    7253 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:39:16.713353    7253 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:39:16.716509    7253 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:39:16.716551    7253 cni.go:84] Creating CNI manager for ""
	I0729 10:39:16.716557    7253 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 10:39:16.716563    7253 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 10:39:16.716595    7253 start.go:340] cluster config:
	{Name:ha-320000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-320000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:39:16.720308    7253 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:39:16.728449    7253 out.go:177] * Starting "ha-320000" primary control-plane node in "ha-320000" cluster
	I0729 10:39:16.732302    7253 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:39:16.732323    7253 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:39:16.732334    7253 cache.go:56] Caching tarball of preloaded images
	I0729 10:39:16.732399    7253 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:39:16.732406    7253 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:39:16.732645    7253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/ha-320000/config.json ...
	I0729 10:39:16.732658    7253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/ha-320000/config.json: {Name:mk42799cbd12189f68d10d65b79d9a18fc7a9202 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:39:16.733060    7253 start.go:360] acquireMachinesLock for ha-320000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:39:16.733096    7253 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "ha-320000"
	I0729 10:39:16.733108    7253 start.go:93] Provisioning new machine with config: &{Name:ha-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-320000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:39:16.733139    7253 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:39:16.741413    7253 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:39:16.759593    7253 start.go:159] libmachine.API.Create for "ha-320000" (driver="qemu2")
	I0729 10:39:16.759621    7253 client.go:168] LocalClient.Create starting
	I0729 10:39:16.759690    7253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:39:16.759724    7253 main.go:141] libmachine: Decoding PEM data...
	I0729 10:39:16.759734    7253 main.go:141] libmachine: Parsing certificate...
	I0729 10:39:16.759773    7253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:39:16.759797    7253 main.go:141] libmachine: Decoding PEM data...
	I0729 10:39:16.759812    7253 main.go:141] libmachine: Parsing certificate...
	I0729 10:39:16.760235    7253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:39:16.908939    7253 main.go:141] libmachine: Creating SSH key...
	I0729 10:39:17.016275    7253 main.go:141] libmachine: Creating Disk image...
	I0729 10:39:17.016280    7253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:39:17.016483    7253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2
	I0729 10:39:17.025683    7253 main.go:141] libmachine: STDOUT: 
	I0729 10:39:17.025700    7253 main.go:141] libmachine: STDERR: 
	I0729 10:39:17.025740    7253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2 +20000M
	I0729 10:39:17.033522    7253 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:39:17.033535    7253 main.go:141] libmachine: STDERR: 
	I0729 10:39:17.033552    7253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2
	I0729 10:39:17.033557    7253 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:39:17.033565    7253 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:39:17.033599    7253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:a0:4c:01:a4:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2
	I0729 10:39:17.035194    7253 main.go:141] libmachine: STDOUT: 
	I0729 10:39:17.035210    7253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:39:17.035228    7253 client.go:171] duration metric: took 275.601292ms to LocalClient.Create
	I0729 10:39:19.037401    7253 start.go:128] duration metric: took 2.304242375s to createHost
	I0729 10:39:19.037541    7253 start.go:83] releasing machines lock for "ha-320000", held for 2.304428s
	W0729 10:39:19.037609    7253 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:39:19.047674    7253 out.go:177] * Deleting "ha-320000" in qemu2 ...
	W0729 10:39:19.077968    7253 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:39:19.077994    7253 start.go:729] Will try again in 5 seconds ...
	I0729 10:39:24.080263    7253 start.go:360] acquireMachinesLock for ha-320000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:39:24.080717    7253 start.go:364] duration metric: took 355.167µs to acquireMachinesLock for "ha-320000"
	I0729 10:39:24.080837    7253 start.go:93] Provisioning new machine with config: &{Name:ha-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-320000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:39:24.081174    7253 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:39:24.095705    7253 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:39:24.146504    7253 start.go:159] libmachine.API.Create for "ha-320000" (driver="qemu2")
	I0729 10:39:24.146552    7253 client.go:168] LocalClient.Create starting
	I0729 10:39:24.146667    7253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:39:24.146735    7253 main.go:141] libmachine: Decoding PEM data...
	I0729 10:39:24.146751    7253 main.go:141] libmachine: Parsing certificate...
	I0729 10:39:24.146820    7253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:39:24.146872    7253 main.go:141] libmachine: Decoding PEM data...
	I0729 10:39:24.146886    7253 main.go:141] libmachine: Parsing certificate...
	I0729 10:39:24.147449    7253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:39:24.307195    7253 main.go:141] libmachine: Creating SSH key...
	I0729 10:39:24.352743    7253 main.go:141] libmachine: Creating Disk image...
	I0729 10:39:24.352748    7253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:39:24.352953    7253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2
	I0729 10:39:24.362111    7253 main.go:141] libmachine: STDOUT: 
	I0729 10:39:24.362128    7253 main.go:141] libmachine: STDERR: 
	I0729 10:39:24.362173    7253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2 +20000M
	I0729 10:39:24.369921    7253 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:39:24.369934    7253 main.go:141] libmachine: STDERR: 
	I0729 10:39:24.369952    7253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2
	I0729 10:39:24.369955    7253 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:39:24.369962    7253 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:39:24.369991    7253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0a:e0:7c:97:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2
	I0729 10:39:24.371637    7253 main.go:141] libmachine: STDOUT: 
	I0729 10:39:24.371650    7253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:39:24.371670    7253 client.go:171] duration metric: took 225.104625ms to LocalClient.Create
	I0729 10:39:26.373845    7253 start.go:128] duration metric: took 2.292640458s to createHost
	I0729 10:39:26.373938    7253 start.go:83] releasing machines lock for "ha-320000", held for 2.293172458s
	W0729 10:39:26.374252    7253 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-320000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-320000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:39:26.388844    7253 out.go:177] 
	W0729 10:39:26.393013    7253 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:39:26.393053    7253 out.go:239] * 
	* 
	W0729 10:39:26.395554    7253 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:39:26.405778    7253 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-320000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (68.095125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.88s)
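
Both VM create attempts above fail at the same step: socket_vmnet_client cannot reach the daemon socket, so QEMU never starts. A quick way to confirm the daemon's state before rerunning is to dial /var/run/socket_vmnet (the SocketVMnetPath from the config above) directly; a minimal sketch, independent of minikube's own health checks:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused dial here reproduces the "Connection refused" in the log,
	// i.e. the socket_vmnet daemon is not listening on the host.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}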

TestMultiControlPlane/serial/DeployApp (115.04s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (58.495709ms)

** stderr ** 
	error: cluster "ha-320000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- rollout status deployment/busybox: exit status 1 (56.074459ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.595459ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.979833ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.599708ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.018958ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.586167ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.9585ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.687709ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.66825ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.851542ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.227167ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.307459ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.141333ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.836834ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.271542ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.333959ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (30.083166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (115.04s)

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-320000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.057542ms)

** stderr ** 
	error: no server found for cluster "ha-320000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (30.012167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-320000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-320000 -v=7 --alsologtostderr: exit status 83 (42.605208ms)

-- stdout --
	* The control-plane node ha-320000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-320000"

-- /stdout --
** stderr ** 
	I0729 10:41:21.643509    7342 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:21.643876    7342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:21.643879    7342 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:21.643882    7342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:21.644036    7342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:21.644267    7342 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:21.644455    7342 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:21.649767    7342 out.go:177] * The control-plane node ha-320000 host is not running: state=Stopped
	I0729 10:41:21.654772    7342 out.go:177]   To start a cluster, run: "minikube start -p ha-320000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-320000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (30.155ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-320000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-320000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.232125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-320000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-320000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-320000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (29.797ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
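
The second error at ha_test.go:264 follows mechanically from the first: kubectl exits with a configuration error before printing anything, and decoding an empty byte slice with encoding/json always yields "unexpected end of JSON input". A two-line reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels) // kubectl produced no output
	fmt.Println(err)                           // unexpected end of JSON input
}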

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-320000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-320000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-320000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-320000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-320000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-320000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-320000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-320000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (29.795042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
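
The HAppy check decodes the profile-list JSON quoted above and counts Config.Nodes, expecting 4 (three control planes plus the added worker) but finding only the single stopped control-plane entry. A trimmed sketch of that decode; the struct here mirrors only the keys visible in the output, not minikube's full config schema, and the sample JSON is abbreviated:

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal mirror of the fields the node count needs.
type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	}
}

func main() {
	// Abbreviated from the report's `profile list --output json` output.
	out := []byte(`{"invalid":[],"valid":[{"Name":"ha-320000","Status":"Stopped",
	 "Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	fmt.Println(len(pl.Valid[0].Config.Nodes), "node(s)") // 1, not the expected 4
}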

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status --output json -v=7 --alsologtostderr: exit status 7 (29.682833ms)

-- stdout --
	{"Name":"ha-320000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0729 10:41:21.849232    7354 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:21.849405    7354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:21.849414    7354 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:21.849421    7354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:21.849558    7354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:21.849688    7354 out.go:298] Setting JSON to true
	I0729 10:41:21.849704    7354 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:21.849749    7354 notify.go:220] Checking for updates...
	I0729 10:41:21.849904    7354 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:21.849910    7354 status.go:255] checking status of ha-320000 ...
	I0729 10:41:21.850105    7354 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:41:21.850109    7354 status.go:343] host is not running, skipping remaining checks
	I0729 10:41:21.850111    7354 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-320000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (29.825542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
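
The decode failure at ha_test.go:333 is a shape mismatch rather than corrupt output: with a single node, status --output json emits one object, while the test unmarshals into a slice of cmd.Status. A standalone reproduction (this Status struct only mirrors the fields visible in the stdout above; the real type lives in minikube's cmd package and may differ):

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	out := []byte(`{"Name":"ha-320000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []Status
	fmt.Println(json.Unmarshal(out, &many)) // json: cannot unmarshal object into Go value of type []main.Status

	var one Status
	fmt.Println(json.Unmarshal(out, &one), one.Host) // <nil> Stopped
}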

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 node stop m02 -v=7 --alsologtostderr: exit status 85 (47.497042ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 10:41:21.909539    7358 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:21.910114    7358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:21.910126    7358 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:21.910130    7358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:21.910300    7358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:21.910535    7358 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:21.910759    7358 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:21.915085    7358 out.go:177] 
	W0729 10:41:21.918132    7358 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0729 10:41:21.918137    7358 out.go:239] * 
	* 
	W0729 10:41:21.920010    7358 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:41:21.924067    7358 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-320000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (30.145291ms)

-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:41:21.957507    7360 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:21.957682    7360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:21.957685    7360 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:21.957687    7360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:21.957814    7360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:21.957926    7360 out.go:298] Setting JSON to false
	I0729 10:41:21.957936    7360 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:21.958000    7360 notify.go:220] Checking for updates...
	I0729 10:41:21.958123    7360 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:21.958129    7360 status.go:255] checking status of ha-320000 ...
	I0729 10:41:21.958338    7360 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:41:21.958341    7360 status.go:343] host is not running, skipping remaining checks
	I0729 10:41:21.958343    7360 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr": ha-320000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr": ha-320000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr": ha-320000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr": ha-320000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (29.977292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
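
The stop here never reaches the VM layer: `GUEST_NODE_RETRIEVE: Could not find node m02` indicates the profile only ever recorded its primary node, so there is no secondary control-plane node to stop. A minimal sketch for confirming what the profile actually contains (suggested follow-up commands, not part of the recorded run):

    out/minikube-darwin-arm64 node list -p ha-320000
    out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr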

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-320000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-320000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-320000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-320000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (28.839459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
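
The "Degraded" check is driven entirely by the profile JSON quoted above. A minimal sketch for pulling the status field out of that JSON, assuming jq is available (not part of the recorded run):

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-320000") | .Status'

Against the config captured above this prints Stopped, which is exactly the mismatch the assertion reports.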

TestMultiControlPlane/serial/RestartSecondaryNode (48.46s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 node start m02 -v=7 --alsologtostderr: exit status 85 (45.731333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 10:41:22.093972    7369 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:22.094991    7369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:22.094994    7369 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:22.094997    7369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:22.095166    7369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:22.095392    7369 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:22.095577    7369 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:22.099257    7369 out.go:177] 
	W0729 10:41:22.103072    7369 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0729 10:41:22.103077    7369 out.go:239] * 
	* 
	W0729 10:41:22.104957    7369 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:41:22.108069    7369 out.go:177] 

** /stderr **
ha_test.go:422: I0729 10:41:22.093972    7369 out.go:291] Setting OutFile to fd 1 ...
I0729 10:41:22.094991    7369 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:41:22.094994    7369 out.go:304] Setting ErrFile to fd 2...
I0729 10:41:22.094997    7369 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:41:22.095166    7369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
I0729 10:41:22.095392    7369 mustload.go:65] Loading cluster: ha-320000
I0729 10:41:22.095577    7369 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:41:22.099257    7369 out.go:177] 
W0729 10:41:22.103072    7369 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0729 10:41:22.103077    7369 out.go:239] * 
* 
W0729 10:41:22.104957    7369 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 10:41:22.108069    7369 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-320000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (29.83275ms)

-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:41:22.140221    7371 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:22.140375    7371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:22.140378    7371 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:22.140381    7371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:22.140508    7371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:22.140616    7371 out.go:298] Setting JSON to false
	I0729 10:41:22.140626    7371 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:22.140683    7371 notify.go:220] Checking for updates...
	I0729 10:41:22.140807    7371 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:22.140813    7371 status.go:255] checking status of ha-320000 ...
	I0729 10:41:22.141017    7371 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:41:22.141021    7371 status.go:343] host is not running, skipping remaining checks
	I0729 10:41:22.141023    7371 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (72.182042ms)

-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:41:23.393536    7373 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:23.393780    7373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:23.393785    7373 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:23.393788    7373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:23.393979    7373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:23.394155    7373 out.go:298] Setting JSON to false
	I0729 10:41:23.394173    7373 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:23.394228    7373 notify.go:220] Checking for updates...
	I0729 10:41:23.394449    7373 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:23.394458    7373 status.go:255] checking status of ha-320000 ...
	I0729 10:41:23.394732    7373 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:41:23.394737    7373 status.go:343] host is not running, skipping remaining checks
	I0729 10:41:23.394739    7373 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (71.693791ms)

-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:41:25.092359    7375 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:25.092583    7375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:25.092587    7375 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:25.092590    7375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:25.092772    7375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:25.092951    7375 out.go:298] Setting JSON to false
	I0729 10:41:25.092964    7375 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:25.093000    7375 notify.go:220] Checking for updates...
	I0729 10:41:25.093203    7375 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:25.093211    7375 status.go:255] checking status of ha-320000 ...
	I0729 10:41:25.093497    7375 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:41:25.093502    7375 status.go:343] host is not running, skipping remaining checks
	I0729 10:41:25.093505    7375 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (72.132583ms)

-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:41:27.750152    7377 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:27.750366    7377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:27.750370    7377 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:27.750373    7377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:27.750528    7377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:27.750674    7377 out.go:298] Setting JSON to false
	I0729 10:41:27.750687    7377 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:27.750733    7377 notify.go:220] Checking for updates...
	I0729 10:41:27.750925    7377 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:27.750933    7377 status.go:255] checking status of ha-320000 ...
	I0729 10:41:27.751205    7377 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:41:27.751210    7377 status.go:343] host is not running, skipping remaining checks
	I0729 10:41:27.751213    7377 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (74.314041ms)

-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:41:30.116771    7381 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:30.116978    7381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:30.116991    7381 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:30.116995    7381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:30.117216    7381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:30.117402    7381 out.go:298] Setting JSON to false
	I0729 10:41:30.117418    7381 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:30.117455    7381 notify.go:220] Checking for updates...
	I0729 10:41:30.117712    7381 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:30.117720    7381 status.go:255] checking status of ha-320000 ...
	I0729 10:41:30.117978    7381 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:41:30.117982    7381 status.go:343] host is not running, skipping remaining checks
	I0729 10:41:30.117986    7381 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (72.758291ms)

-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:41:32.733463    7383 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:32.733665    7383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:32.733669    7383 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:32.733672    7383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:32.733844    7383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:32.733981    7383 out.go:298] Setting JSON to false
	I0729 10:41:32.733993    7383 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:32.734029    7383 notify.go:220] Checking for updates...
	I0729 10:41:32.734244    7383 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:32.734253    7383 status.go:255] checking status of ha-320000 ...
	I0729 10:41:32.734524    7383 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:41:32.734529    7383 status.go:343] host is not running, skipping remaining checks
	I0729 10:41:32.734532    7383 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (72.597375ms)

-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:41:43.563494    7385 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:43.563705    7385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:43.563709    7385 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:43.563712    7385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:43.563880    7385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:43.564036    7385 out.go:298] Setting JSON to false
	I0729 10:41:43.564047    7385 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:43.564082    7385 notify.go:220] Checking for updates...
	I0729 10:41:43.564304    7385 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:43.564312    7385 status.go:255] checking status of ha-320000 ...
	I0729 10:41:43.564574    7385 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:41:43.564579    7385 status.go:343] host is not running, skipping remaining checks
	I0729 10:41:43.564585    7385 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (76.769083ms)

-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:41:55.017244    7387 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:41:55.017424    7387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:55.017428    7387 out.go:304] Setting ErrFile to fd 2...
	I0729 10:41:55.017431    7387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:41:55.017580    7387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:41:55.017744    7387 out.go:298] Setting JSON to false
	I0729 10:41:55.017756    7387 mustload.go:65] Loading cluster: ha-320000
	I0729 10:41:55.017793    7387 notify.go:220] Checking for updates...
	I0729 10:41:55.017994    7387 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:41:55.018002    7387 status.go:255] checking status of ha-320000 ...
	I0729 10:41:55.018270    7387 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:41:55.018275    7387 status.go:343] host is not running, skipping remaining checks
	I0729 10:41:55.018278    7387 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (74.321791ms)

-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:42:10.426327    7389 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:42:10.426524    7389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:10.426529    7389 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:10.426532    7389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:10.426747    7389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:42:10.426909    7389 out.go:298] Setting JSON to false
	I0729 10:42:10.426924    7389 mustload.go:65] Loading cluster: ha-320000
	I0729 10:42:10.426955    7389 notify.go:220] Checking for updates...
	I0729 10:42:10.427188    7389 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:42:10.427196    7389 status.go:255] checking status of ha-320000 ...
	I0729 10:42:10.427461    7389 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:42:10.427466    7389 status.go:343] host is not running, skipping remaining checks
	I0729 10:42:10.427469    7389 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (33.481542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.46s)
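
The 48s spent here is almost entirely status polling: ha_test.go:428 re-runs `status` repeatedly (the timestamps step from 10:41:22 up to 10:42:10), and every attempt exits 7 with all components Stopped because the `node start` itself failed with exit status 85. A rough shell approximation of that polling, with delays chosen only for illustration (the real retry lives in the test's Go code):

    # poll status until it succeeds or the delays are exhausted
    for delay in 1 2 3 5 11 11 15; do
      out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr && break
      sleep "$delay"
    done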

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-320000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-320000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-320000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-320000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-320000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-320000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-320000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-320000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (29.635333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)
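
Both assertions above read from the same profile JSON: the node-count check and the HAppy-status check. Counting the nodes the profile actually records, again assuming jq is available (not part of the recorded run):

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | select(.Name == "ha-320000") | .Config.Nodes | length'

For the config captured above this prints 1, versus the 4 nodes the test expects after an HA deployment.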

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.36s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-320000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-320000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-320000 -v=7 --alsologtostderr: (2.0007715s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-320000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-320000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.230588791s)

-- stdout --
	* [ha-320000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-320000" primary control-plane node in "ha-320000" cluster
	* Restarting existing qemu2 VM for "ha-320000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-320000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:42:12.639385    7410 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:42:12.639568    7410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:12.639573    7410 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:12.639577    7410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:12.639770    7410 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:42:12.641074    7410 out.go:298] Setting JSON to false
	I0729 10:42:12.661472    7410 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4301,"bootTime":1722270631,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:42:12.661558    7410 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:42:12.667062    7410 out.go:177] * [ha-320000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:42:12.674004    7410 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:42:12.674042    7410 notify.go:220] Checking for updates...
	I0729 10:42:12.681834    7410 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:42:12.684984    7410 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:42:12.688034    7410 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:42:12.691006    7410 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:42:12.694013    7410 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:42:12.697357    7410 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:42:12.697429    7410 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:42:12.702023    7410 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:42:12.709004    7410 start.go:297] selected driver: qemu2
	I0729 10:42:12.709011    7410 start.go:901] validating driver "qemu2" against &{Name:ha-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-320000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:42:12.709086    7410 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:42:12.711610    7410 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:42:12.711654    7410 cni.go:84] Creating CNI manager for ""
	I0729 10:42:12.711659    7410 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 10:42:12.711712    7410 start.go:340] cluster config:
	{Name:ha-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-320000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:42:12.715458    7410 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:42:12.723935    7410 out.go:177] * Starting "ha-320000" primary control-plane node in "ha-320000" cluster
	I0729 10:42:12.728135    7410 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:42:12.728150    7410 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:42:12.728167    7410 cache.go:56] Caching tarball of preloaded images
	I0729 10:42:12.728227    7410 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:42:12.728233    7410 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:42:12.728281    7410 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/ha-320000/config.json ...
	I0729 10:42:12.728750    7410 start.go:360] acquireMachinesLock for ha-320000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:42:12.728787    7410 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "ha-320000"
	I0729 10:42:12.728797    7410 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:42:12.728803    7410 fix.go:54] fixHost starting: 
	I0729 10:42:12.728936    7410 fix.go:112] recreateIfNeeded on ha-320000: state=Stopped err=<nil>
	W0729 10:42:12.728946    7410 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:42:12.737005    7410 out.go:177] * Restarting existing qemu2 VM for "ha-320000" ...
	I0729 10:42:12.740989    7410 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:42:12.741018    7410 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0a:e0:7c:97:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2
	I0729 10:42:12.743122    7410 main.go:141] libmachine: STDOUT: 
	I0729 10:42:12.743142    7410 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:42:12.743180    7410 fix.go:56] duration metric: took 14.376958ms for fixHost
	I0729 10:42:12.743185    7410 start.go:83] releasing machines lock for "ha-320000", held for 14.394208ms
	W0729 10:42:12.743192    7410 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:42:12.743227    7410 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:42:12.743233    7410 start.go:729] Will try again in 5 seconds ...
	I0729 10:42:17.745374    7410 start.go:360] acquireMachinesLock for ha-320000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:42:17.745768    7410 start.go:364] duration metric: took 280.083µs to acquireMachinesLock for "ha-320000"
	I0729 10:42:17.745901    7410 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:42:17.745918    7410 fix.go:54] fixHost starting: 
	I0729 10:42:17.746671    7410 fix.go:112] recreateIfNeeded on ha-320000: state=Stopped err=<nil>
	W0729 10:42:17.746702    7410 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:42:17.752207    7410 out.go:177] * Restarting existing qemu2 VM for "ha-320000" ...
	I0729 10:42:17.759117    7410 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:42:17.759391    7410 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0a:e0:7c:97:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2
	I0729 10:42:17.768350    7410 main.go:141] libmachine: STDOUT: 
	I0729 10:42:17.768424    7410 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:42:17.768506    7410 fix.go:56] duration metric: took 22.585709ms for fixHost
	I0729 10:42:17.768524    7410 start.go:83] releasing machines lock for "ha-320000", held for 22.732292ms
	W0729 10:42:17.768728    7410 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-320000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-320000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:42:17.776223    7410 out.go:177] 
	W0729 10:42:17.780205    7410 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:42:17.780228    7410 out.go:239] * 
	* 
	W0729 10:42:17.782859    7410 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:42:17.790181    7410 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-320000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-320000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (33.256209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.36s)
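Note: every start failure above reduces to the qemu2 driver's socket_vmnet_client being unable to reach the unix socket at /var/run/socket_vmnet. A minimal Go probe of that socket (assuming only the default path shown in the logs; this is not part of the test suite) reproduces the same "connection refused" independently of minikube:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client is handed on the
		// qemu-system-aarch64 command line above. If the socket_vmnet daemon is
		// not running (or the socket file is stale), this fails with
		// "connect: connection refused", matching the driver error in the log.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}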

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.201708ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-320000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-320000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:42:17.933380    7426 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:42:17.933956    7426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:17.933964    7426 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:17.933966    7426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:17.934130    7426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:42:17.934322    7426 mustload.go:65] Loading cluster: ha-320000
	I0729 10:42:17.934512    7426 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:42:17.938907    7426 out.go:177] * The control-plane node ha-320000 host is not running: state=Stopped
	I0729 10:42:17.941905    7426 out.go:177]   To start a cluster, run: "minikube start -p ha-320000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-320000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (29.328792ms)

                                                
                                                
-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:42:17.973379    7428 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:42:17.973539    7428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:17.973543    7428 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:17.973545    7428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:17.973696    7428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:42:17.973833    7428 out.go:298] Setting JSON to false
	I0729 10:42:17.973842    7428 mustload.go:65] Loading cluster: ha-320000
	I0729 10:42:17.973909    7428 notify.go:220] Checking for updates...
	I0729 10:42:17.974042    7428 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:42:17.974048    7428 status.go:255] checking status of ha-320000 ...
	I0729 10:42:17.974252    7428 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:42:17.974255    7428 status.go:343] host is not running, skipping remaining checks
	I0729 10:42:17.974258    7428 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (29.584083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-320000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-320000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-320000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-320000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (29.445208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
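Note: the Degraded checks parse the JSON quoted above from `out/minikube-darwin-arm64 profile list --output json` and compare the profile's Status field. A minimal decoder for just the two fields the comparison needs (a sketch; the struct here is illustrative, not ha_test.go's actual types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors only the parts of the `profile list --output json`
	// payload read below; the field tags match the keys visible in the log.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Abbreviated from the payload in the log above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-320000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// The test expects "Degraded" here; the report shows "Stopped".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}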

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-320000 stop -v=7 --alsologtostderr: (3.82630825s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr: exit status 7 (67.119125ms)

                                                
                                                
-- stdout --
	ha-320000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:42:21.971705    7458 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:42:21.971935    7458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:21.971939    7458 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:21.971942    7458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:21.972121    7458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:42:21.972324    7458 out.go:298] Setting JSON to false
	I0729 10:42:21.972345    7458 mustload.go:65] Loading cluster: ha-320000
	I0729 10:42:21.972378    7458 notify.go:220] Checking for updates...
	I0729 10:42:21.972644    7458 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:42:21.972652    7458 status.go:255] checking status of ha-320000 ...
	I0729 10:42:21.972941    7458 status.go:330] ha-320000 host status = "Stopped" (err=<nil>)
	I0729 10:42:21.972946    7458 status.go:343] host is not running, skipping remaining checks
	I0729 10:42:21.972949    7458 status.go:257] ha-320000 status: &{Name:ha-320000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr": ha-320000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr": ha-320000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-320000 status -v=7 --alsologtostderr": ha-320000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (32.65375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.93s)
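Note: the three assertions above inspect the `status` text for the number of control-plane entries and stopped kubelets/apiservers; with only one node reported, none of the expected counts can be met. A sketch of the same kind of check using plain substring counts (the real assertions live in ha_test.go and may differ in detail):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status text for the single stopped control-plane node, as printed in
		// the report; a healthy run of this test would list multiple nodes.
		status := "ha-320000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"

		fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))   // test expects 2
		fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))     // test expects 3
		fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped")) // test expects 2
	}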

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-320000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-320000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.185337709s)

                                                
                                                
-- stdout --
	* [ha-320000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-320000" primary control-plane node in "ha-320000" cluster
	* Restarting existing qemu2 VM for "ha-320000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-320000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:42:22.034099    7462 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:42:22.034220    7462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:22.034223    7462 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:22.034226    7462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:22.034356    7462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:42:22.035392    7462 out.go:298] Setting JSON to false
	I0729 10:42:22.051710    7462 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4311,"bootTime":1722270631,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:42:22.051778    7462 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:42:22.056969    7462 out.go:177] * [ha-320000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:42:22.063921    7462 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:42:22.063969    7462 notify.go:220] Checking for updates...
	I0729 10:42:22.070871    7462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:42:22.073936    7462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:42:22.076899    7462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:42:22.079909    7462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:42:22.082886    7462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:42:22.086154    7462 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:42:22.086438    7462 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:42:22.090882    7462 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:42:22.097903    7462 start.go:297] selected driver: qemu2
	I0729 10:42:22.097910    7462 start.go:901] validating driver "qemu2" against &{Name:ha-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-320000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:42:22.098001    7462 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:42:22.100311    7462 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:42:22.100339    7462 cni.go:84] Creating CNI manager for ""
	I0729 10:42:22.100345    7462 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 10:42:22.100387    7462 start.go:340] cluster config:
	{Name:ha-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-320000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:42:22.103875    7462 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:42:22.112894    7462 out.go:177] * Starting "ha-320000" primary control-plane node in "ha-320000" cluster
	I0729 10:42:22.116856    7462 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:42:22.116871    7462 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:42:22.116883    7462 cache.go:56] Caching tarball of preloaded images
	I0729 10:42:22.116939    7462 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:42:22.116944    7462 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:42:22.117009    7462 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/ha-320000/config.json ...
	I0729 10:42:22.117454    7462 start.go:360] acquireMachinesLock for ha-320000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:42:22.117489    7462 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "ha-320000"
	I0729 10:42:22.117500    7462 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:42:22.117506    7462 fix.go:54] fixHost starting: 
	I0729 10:42:22.117623    7462 fix.go:112] recreateIfNeeded on ha-320000: state=Stopped err=<nil>
	W0729 10:42:22.117631    7462 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:42:22.124927    7462 out.go:177] * Restarting existing qemu2 VM for "ha-320000" ...
	I0729 10:42:22.128930    7462 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:42:22.128976    7462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0a:e0:7c:97:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2
	I0729 10:42:22.131041    7462 main.go:141] libmachine: STDOUT: 
	I0729 10:42:22.131060    7462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:42:22.131086    7462 fix.go:56] duration metric: took 13.580667ms for fixHost
	I0729 10:42:22.131090    7462 start.go:83] releasing machines lock for "ha-320000", held for 13.596666ms
	W0729 10:42:22.131096    7462 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:42:22.131132    7462 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:42:22.131137    7462 start.go:729] Will try again in 5 seconds ...
	I0729 10:42:27.133239    7462 start.go:360] acquireMachinesLock for ha-320000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:42:27.133627    7462 start.go:364] duration metric: took 299.541µs to acquireMachinesLock for "ha-320000"
	I0729 10:42:27.133751    7462 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:42:27.133770    7462 fix.go:54] fixHost starting: 
	I0729 10:42:27.134440    7462 fix.go:112] recreateIfNeeded on ha-320000: state=Stopped err=<nil>
	W0729 10:42:27.134466    7462 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:42:27.138856    7462 out.go:177] * Restarting existing qemu2 VM for "ha-320000" ...
	I0729 10:42:27.146757    7462 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:42:27.147039    7462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0a:e0:7c:97:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/ha-320000/disk.qcow2
	I0729 10:42:27.155859    7462 main.go:141] libmachine: STDOUT: 
	I0729 10:42:27.155911    7462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:42:27.155981    7462 fix.go:56] duration metric: took 22.212541ms for fixHost
	I0729 10:42:27.155998    7462 start.go:83] releasing machines lock for "ha-320000", held for 22.34875ms
	W0729 10:42:27.156164    7462 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-320000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-320000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:42:27.163769    7462 out.go:177] 
	W0729 10:42:27.167864    7462 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:42:27.167944    7462 out.go:239] * 
	* 
	W0729 10:42:27.170490    7462 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:42:27.178816    7462 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-320000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (66.615208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-320000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-320000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-320000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-320000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (29.997917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-320000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-320000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.411042ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-320000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-320000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:42:27.368764    7477 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:42:27.368919    7477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:27.368922    7477 out.go:304] Setting ErrFile to fd 2...
	I0729 10:42:27.368924    7477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:42:27.369042    7477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:42:27.369281    7477 mustload.go:65] Loading cluster: ha-320000
	I0729 10:42:27.369473    7477 config.go:182] Loaded profile config "ha-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:42:27.374296    7477 out.go:177] * The control-plane node ha-320000 host is not running: state=Stopped
	I0729 10:42:27.378326    7477 out.go:177]   To start a cluster, run: "minikube start -p ha-320000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-320000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (29.210542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-320000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-320000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-320000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-320000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-320000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-320000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-320000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-320000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-320000 -n ha-320000: exit status 7 (30.275375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (10.01s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-059000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-059000 --driver=qemu2 : exit status 80 (9.940396125s)

                                                
                                                
-- stdout --
	* [image-059000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-059000" primary control-plane node in "image-059000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-059000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-059000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-059000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-059000 -n image-059000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-059000 -n image-059000: exit status 7 (67.787292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-059000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.01s)
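Note: the `executing:` lines earlier in the report show that the driver does not run qemu-system-aarch64 directly; it wraps it in socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected socket to qemu as an inherited descriptor (`-netdev socket,id=net0,fd=3`). A heavily reduced sketch of that invocation, using only the paths from the log and abbreviating the qemu flags:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Every start failure in this report happens at socket_vmnet_client's
		// connect step, before qemu itself gets a chance to run.
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet",
			"qemu-system-aarch64",
			"-netdev", "socket,id=net0,fd=3", // remaining flags omitted; see the log
		)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// With the daemon down this surfaces the same "Connection refused".
			fmt.Println("start failed:", err)
		}
	}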

                                                
                                    
TestJSONOutput/start/Command (9.91s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-657000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-657000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.9076975s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3dea0e35-d935-460d-ad86-df6ce20c1819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-657000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"83fab9e8-55a6-4973-b030-9167e8d577a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19339"}}
	{"specversion":"1.0","id":"09a0fdc5-2149-4c3b-96d3-377bc8e3a148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig"}}
	{"specversion":"1.0","id":"46a297ed-a879-4dd5-a041-4d694c58cf99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"433923ad-3701-493e-a806-0f8a3f49d006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4dadd48b-cbdd-4ba6-9d6f-626678d9f7aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube"}}
	{"specversion":"1.0","id":"d28426d2-0494-47ed-85fb-f4e7b5d37f72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"82c91469-6baa-4c0a-b4e0-8ad8bd6124d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d01f407-f52d-45a7-90ca-35c321792252","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"038d6212-4ca1-4399-a7b6-76bbbeb510d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-657000\" primary control-plane node in \"json-output-657000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f03b6c30-2654-4269-a0dd-eacf79fe723d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"342d80ce-3ebd-4cad-a98a-641cedf22a7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-657000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"3236aa49-b9f8-4611-a183-aa4b68a55c07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"625db60e-6b31-41fc-acfa-b7974cd1bd9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"a6aa369f-6c89-481b-99dd-3472d902829e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-657000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"07301224-2daa-477c-a6af-17498c7b758a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"08a995f2-4812-4410-8ad8-7bf5600fc114","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-657000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.91s)
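
The secondary failure here is worth noting: json_output_test.go decodes stdout line by line as CloudEvents, and the bare OUTPUT:/ERROR: lines injected by socket_vmnet_client are not JSON, so decoding stops at their first byte with exactly the reported error, "invalid character 'O' looking for beginning of value" (the unpause test below fails the same way on a leading '*'). A simplified sketch of that per-line decode, not the test's actual code:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Lines of the kind seen in the stdout above: one real CloudEvent,
		// then the raw text injected by socket_vmnet_client.
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=19339"}}`,
			`OUTPUT: `,
			`ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`,
		}
		for _, line := range lines {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				// Prints: invalid character 'O' looking for beginning of value
				fmt.Println("decode error:", err)
				continue
			}
			fmt.Println("event type:", ev["type"])
		}
	}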

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-657000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-657000 --output=json --user=testUser: exit status 83 (77.725292ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"36aff74f-dba9-4cc1-a5d4-f31881a835e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-657000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"468d21ce-e45a-465c-bab1-f769ce0fb6ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-657000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-657000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-657000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-657000 --output=json --user=testUser: exit status 83 (45.266083ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-657000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-657000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-657000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-657000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.16s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-999000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-999000 --driver=qemu2 : exit status 80 (9.874634042s)

                                                
                                                
-- stdout --
	* [first-999000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-999000" primary control-plane node in "first-999000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-999000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-999000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-999000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 10:42:59.785106 -0700 PDT m=+481.808595167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-000000 -n second-000000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-000000 -n second-000000: exit status 85 (75.32225ms)

                                                
                                                
-- stdout --
	* Profile "second-000000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-000000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-000000" host is not running, skipping log retrieval (state="* Profile \"second-000000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-000000\"")
helpers_test.go:175: Cleaning up "second-000000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-000000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 10:42:59.969926 -0700 PDT m=+481.993417876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-999000 -n first-999000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-999000 -n first-999000: exit status 7 (29.421625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-999000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-999000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-999000
--- FAIL: TestMinikubeProfile (10.16s)
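
The post-mortem helper calls minikube status with --format={{.Host}}, a Go text/template rendered against the status output; that is why the probes above print the single word "Stopped", while the missing "second-000000" profile short-circuits with exit status 85 before any cluster state is reported. A minimal sketch of that template rendering, with an illustrative struct standing in for minikube's real status type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status carries just the field the helper's template reads; minikube's
	// real status structure has more fields. This type is illustrative only.
	type Status struct {
		Host string
	}

	func main() {
		// --format={{.Host}} is a Go text/template; against a stopped profile
		// it renders exactly "Stopped", as in the post-mortem output above.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"})
	}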

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-001000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-001000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.993554084s)

                                                
                                                
-- stdout --
	* [mount-start-1-001000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-001000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-001000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-001000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-001000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-001000 -n mount-start-1-001000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-001000 -n mount-start-1-001000: exit status 7 (68.186875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-001000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.06s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-263000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-263000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.801721083s)

                                                
                                                
-- stdout --
	* [multinode-263000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-263000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 10:43:10.349255    7612 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:43:10.349373    7612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:43:10.349376    7612 out.go:304] Setting ErrFile to fd 2...
	I0729 10:43:10.349378    7612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:43:10.349521    7612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:43:10.350538    7612 out.go:298] Setting JSON to false
	I0729 10:43:10.366806    7612 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4359,"bootTime":1722270631,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:43:10.366884    7612 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:43:10.372415    7612 out.go:177] * [multinode-263000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:43:10.379326    7612 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:43:10.379385    7612 notify.go:220] Checking for updates...
	I0729 10:43:10.387169    7612 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:43:10.390292    7612 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:43:10.393305    7612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:43:10.396328    7612 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:43:10.399308    7612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:43:10.402439    7612 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:43:10.406277    7612 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:43:10.413276    7612 start.go:297] selected driver: qemu2
	I0729 10:43:10.413283    7612 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:43:10.413289    7612 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:43:10.415646    7612 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:43:10.418285    7612 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:43:10.422385    7612 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:43:10.422448    7612 cni.go:84] Creating CNI manager for ""
	I0729 10:43:10.422452    7612 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 10:43:10.422458    7612 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 10:43:10.422486    7612 start.go:340] cluster config:
	{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:43:10.426256    7612 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:43:10.434300    7612 out.go:177] * Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	I0729 10:43:10.438300    7612 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:43:10.438316    7612 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:43:10.438325    7612 cache.go:56] Caching tarball of preloaded images
	I0729 10:43:10.438398    7612 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:43:10.438404    7612 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:43:10.438629    7612 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/multinode-263000/config.json ...
	I0729 10:43:10.438641    7612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/multinode-263000/config.json: {Name:mk55862f8765ac6eb00d329e5af5dad818427dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:43:10.439045    7612 start.go:360] acquireMachinesLock for multinode-263000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:43:10.439080    7612 start.go:364] duration metric: took 29.292µs to acquireMachinesLock for "multinode-263000"
	I0729 10:43:10.439092    7612 start.go:93] Provisioning new machine with config: &{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:43:10.439120    7612 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:43:10.443314    7612 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:43:10.461016    7612 start.go:159] libmachine.API.Create for "multinode-263000" (driver="qemu2")
	I0729 10:43:10.461043    7612 client.go:168] LocalClient.Create starting
	I0729 10:43:10.461101    7612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:43:10.461136    7612 main.go:141] libmachine: Decoding PEM data...
	I0729 10:43:10.461145    7612 main.go:141] libmachine: Parsing certificate...
	I0729 10:43:10.461183    7612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:43:10.461210    7612 main.go:141] libmachine: Decoding PEM data...
	I0729 10:43:10.461219    7612 main.go:141] libmachine: Parsing certificate...
	I0729 10:43:10.461734    7612 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:43:10.608941    7612 main.go:141] libmachine: Creating SSH key...
	I0729 10:43:10.658161    7612 main.go:141] libmachine: Creating Disk image...
	I0729 10:43:10.658166    7612 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:43:10.658361    7612 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2
	I0729 10:43:10.667619    7612 main.go:141] libmachine: STDOUT: 
	I0729 10:43:10.667640    7612 main.go:141] libmachine: STDERR: 
	I0729 10:43:10.667688    7612 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2 +20000M
	I0729 10:43:10.675542    7612 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:43:10.675567    7612 main.go:141] libmachine: STDERR: 
	I0729 10:43:10.675585    7612 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2
	I0729 10:43:10.675590    7612 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:43:10.675600    7612 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:43:10.675625    7612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:4f:66:ee:c5:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2
	I0729 10:43:10.677259    7612 main.go:141] libmachine: STDOUT: 
	I0729 10:43:10.677273    7612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:43:10.677292    7612 client.go:171] duration metric: took 216.248584ms to LocalClient.Create
	I0729 10:43:12.679441    7612 start.go:128] duration metric: took 2.240337625s to createHost
	I0729 10:43:12.679485    7612 start.go:83] releasing machines lock for "multinode-263000", held for 2.240433375s
	W0729 10:43:12.679546    7612 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:43:12.694784    7612 out.go:177] * Deleting "multinode-263000" in qemu2 ...
	W0729 10:43:12.723740    7612 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:43:12.723777    7612 start.go:729] Will try again in 5 seconds ...
	I0729 10:43:17.725913    7612 start.go:360] acquireMachinesLock for multinode-263000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:43:17.726374    7612 start.go:364] duration metric: took 338.875µs to acquireMachinesLock for "multinode-263000"
	I0729 10:43:17.726488    7612 start.go:93] Provisioning new machine with config: &{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:43:17.726808    7612 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:43:17.736377    7612 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:43:17.787077    7612 start.go:159] libmachine.API.Create for "multinode-263000" (driver="qemu2")
	I0729 10:43:17.787148    7612 client.go:168] LocalClient.Create starting
	I0729 10:43:17.787273    7612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:43:17.787340    7612 main.go:141] libmachine: Decoding PEM data...
	I0729 10:43:17.787361    7612 main.go:141] libmachine: Parsing certificate...
	I0729 10:43:17.787416    7612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:43:17.787461    7612 main.go:141] libmachine: Decoding PEM data...
	I0729 10:43:17.787474    7612 main.go:141] libmachine: Parsing certificate...
	I0729 10:43:17.788003    7612 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:43:17.945795    7612 main.go:141] libmachine: Creating SSH key...
	I0729 10:43:18.061353    7612 main.go:141] libmachine: Creating Disk image...
	I0729 10:43:18.061358    7612 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:43:18.061566    7612 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2
	I0729 10:43:18.070774    7612 main.go:141] libmachine: STDOUT: 
	I0729 10:43:18.070796    7612 main.go:141] libmachine: STDERR: 
	I0729 10:43:18.070837    7612 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2 +20000M
	I0729 10:43:18.078626    7612 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:43:18.078642    7612 main.go:141] libmachine: STDERR: 
	I0729 10:43:18.078652    7612 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2
	I0729 10:43:18.078657    7612 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:43:18.078667    7612 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:43:18.078705    7612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:cd:d5:8b:ec:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2
	I0729 10:43:18.080351    7612 main.go:141] libmachine: STDOUT: 
	I0729 10:43:18.080368    7612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:43:18.080381    7612 client.go:171] duration metric: took 293.233625ms to LocalClient.Create
	I0729 10:43:20.082526    7612 start.go:128] duration metric: took 2.355731167s to createHost
	I0729 10:43:20.082577    7612 start.go:83] releasing machines lock for "multinode-263000", held for 2.356217583s
	W0729 10:43:20.082989    7612 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:43:20.091611    7612 out.go:177] 
	W0729 10:43:20.097690    7612 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:43:20.097766    7612 out.go:239] * 
	* 
	W0729 10:43:20.100565    7612 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:43:20.108661    7612 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-263000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (66.932709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.87s)
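
The -v=8 trace makes the create flow explicit: libmachine builds the qcow2 disk with qemu-img convert and resize, then launches qemu-system-aarch64 through socket_vmnet_client (which, per the command line, hands the vmnet connection to QEMU as -netdev socket,id=net0,fd=3), and on failure minikube retries the whole createHost step once after a 5-second pause before exiting with GUEST_PROVISION. A stripped-down sketch of that retry shape, with an illustrative function name rather than minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the createHost step traced above; on this
	// host it always fails the same way, since nothing listens on the socket.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds" in the trace
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}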

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (96.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.8295ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-263000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- rollout status deployment/busybox: exit status 1 (55.640792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.829042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.660875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.761584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.653792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.448916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.129208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.080666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.637208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.032917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.46575ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.255833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.081333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.2685ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.624875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.351541ms)

** stderr ** 
	error: no server found for cluster "multinode-263000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (29.683167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (96.57s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.576917ms)

** stderr ** 
	error: no server found for cluster "multinode-263000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (29.563875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-263000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-263000 -v 3 --alsologtostderr: exit status 83 (43.065125ms)

-- stdout --
	* The control-plane node multinode-263000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-263000"

-- /stdout --
** stderr ** 
	I0729 10:44:56.870806    7693 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:44:56.870977    7693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:56.870980    7693 out.go:304] Setting ErrFile to fd 2...
	I0729 10:44:56.870982    7693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:56.871135    7693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:44:56.871378    7693 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:44:56.871567    7693 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:44:56.876124    7693 out.go:177] * The control-plane node multinode-263000 host is not running: state=Stopped
	I0729 10:44:56.880102    7693 out.go:177]   To start a cluster, run: "minikube start -p multinode-263000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-263000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (29.262084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-263000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-263000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.619667ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-263000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-263000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-263000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
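
The "unexpected end of JSON input" at multinode_test.go:230 is encoding/json's error for zero-length input: because the kubectl call above failed, the captured label list was an empty string. A minimal reproduction (the target type here is a stand-in, not the test's actual one):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var labels []map[string]string
        // An empty capture from the failed kubectl call is zero-length decoder input.
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }
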
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (30.113917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-263000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-263000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-263000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-263000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
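
The check at multinode_test.go:166 counts the entries under Config.Nodes in the JSON above; the stopped profile only ever recorded its primary control-plane node, hence 1 instead of 3. A trimmed sketch of that count against an abbreviated copy of the captured output (the struct keeps only the fields the count touches):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList is trimmed to the fields needed to count nodes per profile.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []struct {
                    Name         string `json:"Name"`
                    ControlPlane bool   `json:"ControlPlane"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        // Abbreviated from the `profile list --output json` capture above.
        raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-263000","Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // prints 1; the test wanted 3
        }
    }
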
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (30.162875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status --output json --alsologtostderr: exit status 7 (29.7205ms)

-- stdout --
	{"Name":"multinode-263000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0729 10:44:57.076591    7705 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:44:57.076730    7705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:57.076733    7705 out.go:304] Setting ErrFile to fd 2...
	I0729 10:44:57.076735    7705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:57.076877    7705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:44:57.076996    7705 out.go:298] Setting JSON to true
	I0729 10:44:57.077006    7705 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:44:57.077063    7705 notify.go:220] Checking for updates...
	I0729 10:44:57.077213    7705 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:44:57.077219    7705 status.go:255] checking status of multinode-263000 ...
	I0729 10:44:57.077419    7705 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:44:57.077423    7705 status.go:343] host is not running, skipping remaining checks
	I0729 10:44:57.077425    7705 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-263000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
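
The unmarshal failure at multinode_test.go:191 is standard encoding/json behavior when a bare JSON object is decoded into a slice: with a single stopped node, status --output json printed the one object shown in stdout above, while the test decodes into []cmd.Status. A minimal reproduction with a trimmed stand-in for that type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status is a trimmed stand-in for the fields visible in the stdout above.
    type Status struct {
        Name string
        Host string
    }

    func main() {
        raw := []byte(`{"Name":"multinode-263000","Host":"Stopped"}`) // single object, not an array

        var many []Status
        if err := json.Unmarshal(raw, &many); err != nil {
            fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
        }

        var one Status
        if err := json.Unmarshal(raw, &one); err == nil {
            fmt.Println("decodes fine as a single object:", one.Name, one.Host)
        }
    }
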
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (29.942459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 node stop m03: exit status 85 (47.091958ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-263000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status: exit status 7 (29.317125ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr: exit status 7 (30.156917ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:44:57.213895    7713 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:44:57.214032    7713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:57.214035    7713 out.go:304] Setting ErrFile to fd 2...
	I0729 10:44:57.214038    7713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:57.214173    7713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:44:57.214281    7713 out.go:298] Setting JSON to false
	I0729 10:44:57.214291    7713 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:44:57.214340    7713 notify.go:220] Checking for updates...
	I0729 10:44:57.214503    7713 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:44:57.214510    7713 status.go:255] checking status of multinode-263000 ...
	I0729 10:44:57.214706    7713 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:44:57.214709    7713 status.go:343] host is not running, skipping remaining checks
	I0729 10:44:57.214711    7713 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr": multinode-263000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (29.851417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (47.5s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.071625ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 10:44:57.273529    7717 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:44:57.274083    7717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:57.274091    7717 out.go:304] Setting ErrFile to fd 2...
	I0729 10:44:57.274102    7717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:57.274263    7717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:44:57.274515    7717 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:44:57.274700    7717 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:44:57.278097    7717 out.go:177] 
	W0729 10:44:57.282103    7717 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0729 10:44:57.282108    7717 out.go:239] * 
	* 
	W0729 10:44:57.284097    7717 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:44:57.288144    7717 out.go:177] 

** /stderr **
multinode_test.go:284: I0729 10:44:57.273529    7717 out.go:291] Setting OutFile to fd 1 ...
I0729 10:44:57.274083    7717 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:44:57.274091    7717 out.go:304] Setting ErrFile to fd 2...
I0729 10:44:57.274102    7717 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:44:57.274263    7717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
I0729 10:44:57.274515    7717 mustload.go:65] Loading cluster: multinode-263000
I0729 10:44:57.274700    7717 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:44:57.278097    7717 out.go:177] 
W0729 10:44:57.282103    7717 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0729 10:44:57.282108    7717 out.go:239] * 
* 
W0729 10:44:57.284097    7717 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 10:44:57.288144    7717 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-263000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (30.476083ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:44:57.321786    7719 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:44:57.321930    7719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:57.321933    7719 out.go:304] Setting ErrFile to fd 2...
	I0729 10:44:57.321936    7719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:57.322084    7719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:44:57.322200    7719 out.go:298] Setting JSON to false
	I0729 10:44:57.322209    7719 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:44:57.322260    7719 notify.go:220] Checking for updates...
	I0729 10:44:57.322413    7719 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:44:57.322420    7719 status.go:255] checking status of multinode-263000 ...
	I0729 10:44:57.322619    7719 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:44:57.322623    7719 status.go:343] host is not running, skipping remaining checks
	I0729 10:44:57.322625    7719 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (76.076334ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:44:58.804720    7721 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:44:58.804980    7721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:58.804985    7721 out.go:304] Setting ErrFile to fd 2...
	I0729 10:44:58.804989    7721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:58.805202    7721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:44:58.805362    7721 out.go:298] Setting JSON to false
	I0729 10:44:58.805376    7721 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:44:58.805420    7721 notify.go:220] Checking for updates...
	I0729 10:44:58.805649    7721 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:44:58.805664    7721 status.go:255] checking status of multinode-263000 ...
	I0729 10:44:58.805992    7721 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:44:58.805998    7721 status.go:343] host is not running, skipping remaining checks
	I0729 10:44:58.806001    7721 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (75.277542ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:45:00.997009    7723 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:00.997559    7723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:00.997582    7723 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:00.997591    7723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:00.998224    7723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:45:00.998518    7723 out.go:298] Setting JSON to false
	I0729 10:45:00.998534    7723 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:45:00.998562    7723 notify.go:220] Checking for updates...
	I0729 10:45:00.998796    7723 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:00.998805    7723 status.go:255] checking status of multinode-263000 ...
	I0729 10:45:00.999067    7723 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:45:00.999072    7723 status.go:343] host is not running, skipping remaining checks
	I0729 10:45:00.999075    7723 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (51.105ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:45:03.801410    7725 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:03.801718    7725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:03.801725    7725 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:03.801729    7725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:03.801971    7725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:45:03.802218    7725 out.go:298] Setting JSON to false
	I0729 10:45:03.802235    7725 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:45:03.802289    7725 notify.go:220] Checking for updates...
	I0729 10:45:03.802664    7725 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:03.802676    7725 status.go:255] checking status of multinode-263000 ...
	I0729 10:45:03.803101    7725 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:45:03.803108    7725 status.go:343] host is not running, skipping remaining checks
	I0729 10:45:03.803113    7725 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (73.04025ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:45:08.153447    7728 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:08.153686    7728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:08.153691    7728 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:08.153694    7728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:08.153881    7728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:45:08.154039    7728 out.go:298] Setting JSON to false
	I0729 10:45:08.154055    7728 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:45:08.154094    7728 notify.go:220] Checking for updates...
	I0729 10:45:08.154344    7728 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:08.154355    7728 status.go:255] checking status of multinode-263000 ...
	I0729 10:45:08.154634    7728 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:45:08.154639    7728 status.go:343] host is not running, skipping remaining checks
	I0729 10:45:08.154642    7728 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (73.676125ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:45:12.852979    7730 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:12.853207    7730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:12.853211    7730 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:12.853214    7730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:12.853395    7730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:45:12.853537    7730 out.go:298] Setting JSON to false
	I0729 10:45:12.853549    7730 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:45:12.853590    7730 notify.go:220] Checking for updates...
	I0729 10:45:12.853816    7730 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:12.853824    7730 status.go:255] checking status of multinode-263000 ...
	I0729 10:45:12.854135    7730 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:45:12.854140    7730 status.go:343] host is not running, skipping remaining checks
	I0729 10:45:12.854143    7730 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (73.385083ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:45:21.690327    7732 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:21.690579    7732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:21.690583    7732 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:21.690586    7732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:21.690771    7732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:45:21.690921    7732 out.go:298] Setting JSON to false
	I0729 10:45:21.690933    7732 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:45:21.690979    7732 notify.go:220] Checking for updates...
	I0729 10:45:21.691185    7732 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:21.691193    7732 status.go:255] checking status of multinode-263000 ...
	I0729 10:45:21.691491    7732 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:45:21.691496    7732 status.go:343] host is not running, skipping remaining checks
	I0729 10:45:21.691499    7732 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (71.905125ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:45:30.076789    7734 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:30.076980    7734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:30.076985    7734 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:30.076988    7734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:30.077148    7734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:45:30.077301    7734 out.go:298] Setting JSON to false
	I0729 10:45:30.077313    7734 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:45:30.077352    7734 notify.go:220] Checking for updates...
	I0729 10:45:30.077597    7734 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:30.077607    7734 status.go:255] checking status of multinode-263000 ...
	I0729 10:45:30.077891    7734 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:45:30.077896    7734 status.go:343] host is not running, skipping remaining checks
	I0729 10:45:30.077899    7734 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (66.926459ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:45:44.707457    7736 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:44.707722    7736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:44.707727    7736 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:44.707731    7736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:44.707920    7736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:45:44.708110    7736 out.go:298] Setting JSON to false
	I0729 10:45:44.708125    7736 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:45:44.708178    7736 notify.go:220] Checking for updates...
	I0729 10:45:44.708418    7736 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:44.708427    7736 status.go:255] checking status of multinode-263000 ...
	I0729 10:45:44.708756    7736 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:45:44.708762    7736 status.go:343] host is not running, skipping remaining checks
	I0729 10:45:44.708765    7736 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (34.283042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (47.50s)

TestMultiNode/serial/RestartKeepsNodes (7.2s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-263000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-263000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-263000: (1.842293542s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-263000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-263000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.222280334s)

-- stdout --
	* [multinode-263000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	* Restarting existing qemu2 VM for "multinode-263000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-263000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
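
Both restart attempts above die with connection refused on /var/run/socket_vmnet, the SocketVMnetPath recorded in the profile config below, meaning nothing was accepting connections on the socket the qemu2 driver needs for networking. A small standalone probe of that path (a diagnostic sketch, not part of the harness; reaching the socket may require the same privileges minikube runs with):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Path taken from the ERROR lines above / SocketVMnetPath in the profile config.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err) // e.g. connect: connection refused
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }
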
** stderr ** 
	I0729 10:45:46.680006    7752 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:46.680186    7752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:46.680191    7752 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:46.680194    7752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:46.680384    7752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:45:46.681616    7752 out.go:298] Setting JSON to false
	I0729 10:45:46.700946    7752 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4515,"bootTime":1722270631,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:45:46.701015    7752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:45:46.705647    7752 out.go:177] * [multinode-263000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:45:46.712587    7752 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:45:46.712620    7752 notify.go:220] Checking for updates...
	I0729 10:45:46.720587    7752 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:45:46.723519    7752 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:45:46.726522    7752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:45:46.729584    7752 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:45:46.732560    7752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:45:46.735892    7752 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:46.735958    7752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:45:46.740523    7752 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:45:46.747552    7752 start.go:297] selected driver: qemu2
	I0729 10:45:46.747559    7752 start.go:901] validating driver "qemu2" against &{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:45:46.747637    7752 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:45:46.750032    7752 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:45:46.750073    7752 cni.go:84] Creating CNI manager for ""
	I0729 10:45:46.750079    7752 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 10:45:46.750133    7752 start.go:340] cluster config:
	{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:45:46.753709    7752 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:45:46.761548    7752 out.go:177] * Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	I0729 10:45:46.765558    7752 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:45:46.765573    7752 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:45:46.765585    7752 cache.go:56] Caching tarball of preloaded images
	I0729 10:45:46.765644    7752 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:45:46.765653    7752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:45:46.765727    7752 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/multinode-263000/config.json ...
	I0729 10:45:46.766190    7752 start.go:360] acquireMachinesLock for multinode-263000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:45:46.766225    7752 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "multinode-263000"
	I0729 10:45:46.766234    7752 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:45:46.766240    7752 fix.go:54] fixHost starting: 
	I0729 10:45:46.766366    7752 fix.go:112] recreateIfNeeded on multinode-263000: state=Stopped err=<nil>
	W0729 10:45:46.766376    7752 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:45:46.774561    7752 out.go:177] * Restarting existing qemu2 VM for "multinode-263000" ...
	I0729 10:45:46.778390    7752 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:45:46.778432    7752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:cd:d5:8b:ec:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2
	I0729 10:45:46.780576    7752 main.go:141] libmachine: STDOUT: 
	I0729 10:45:46.780600    7752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:45:46.780628    7752 fix.go:56] duration metric: took 14.387542ms for fixHost
	I0729 10:45:46.780633    7752 start.go:83] releasing machines lock for "multinode-263000", held for 14.404584ms
	W0729 10:45:46.780638    7752 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:45:46.780665    7752 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:46.780670    7752 start.go:729] Will try again in 5 seconds ...
	I0729 10:45:51.782776    7752 start.go:360] acquireMachinesLock for multinode-263000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:45:51.783173    7752 start.go:364] duration metric: took 317.917µs to acquireMachinesLock for "multinode-263000"
	I0729 10:45:51.783294    7752 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:45:51.783333    7752 fix.go:54] fixHost starting: 
	I0729 10:45:51.784014    7752 fix.go:112] recreateIfNeeded on multinode-263000: state=Stopped err=<nil>
	W0729 10:45:51.784038    7752 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:45:51.791363    7752 out.go:177] * Restarting existing qemu2 VM for "multinode-263000" ...
	I0729 10:45:51.795395    7752 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:45:51.795593    7752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:cd:d5:8b:ec:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2
	I0729 10:45:51.804547    7752 main.go:141] libmachine: STDOUT: 
	I0729 10:45:51.804600    7752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:45:51.804656    7752 fix.go:56] duration metric: took 21.347292ms for fixHost
	I0729 10:45:51.804698    7752 start.go:83] releasing machines lock for "multinode-263000", held for 21.487417ms
	W0729 10:45:51.804836    7752 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:51.812335    7752 out.go:177] 
	W0729 10:45:51.815402    7752 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:45:51.815490    7752 out.go:239] * 
	* 
	W0729 10:45:51.817955    7752 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:45:51.825327    7752 out.go:177] 
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-263000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-263000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (31.9525ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.20s)
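Every failure in this block reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a networking file descriptor and the VM never boots. A minimal diagnostic sketch (not part of the test suite; it assumes the SocketVMnetPath shown in the cluster config above) that reproduces the "Connection refused" from Go:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the socket_vmnet control socket the same way socket_vmnet_client
	// does; on this host the daemon is down, so this prints
	// "connect: connection refused".
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A failed probe points at the host daemon rather than the minikube profile, so the suggested "minikube delete" is unlikely to help.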

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 node delete m03: exit status 83 (40.408209ms)
-- stdout --
	* The control-plane node multinode-263000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-263000"
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-263000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr: exit status 7 (29.679208ms)
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0729 10:45:52.009041    7766 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:52.009201    7766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:52.009205    7766 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:52.009207    7766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:52.009343    7766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:45:52.009457    7766 out.go:298] Setting JSON to false
	I0729 10:45:52.009467    7766 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:45:52.009529    7766 notify.go:220] Checking for updates...
	I0729 10:45:52.009665    7766 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:52.009672    7766 status.go:255] checking status of multinode-263000 ...
	I0729 10:45:52.009883    7766 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:45:52.009887    7766 status.go:343] host is not running, skipping remaining checks
	I0729 10:45:52.009889    7766 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (29.866167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (4.12s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-263000 stop: (3.988470458s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status: exit status 7 (66.382375ms)
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr: exit status 7 (32.853292ms)
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0729 10:45:56.127069    7794 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:56.127221    7794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:56.127224    7794 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:56.127226    7794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:56.127354    7794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:45:56.127492    7794 out.go:298] Setting JSON to false
	I0729 10:45:56.127504    7794 mustload.go:65] Loading cluster: multinode-263000
	I0729 10:45:56.127565    7794 notify.go:220] Checking for updates...
	I0729 10:45:56.127711    7794 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:56.127722    7794 status.go:255] checking status of multinode-263000 ...
	I0729 10:45:56.127951    7794 status.go:330] multinode-263000 host status = "Stopped" (err=<nil>)
	I0729 10:45:56.127955    7794 status.go:343] host is not running, skipping remaining checks
	I0729 10:45:56.127957    7794 status.go:257] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr": multinode-263000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr": multinode-263000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (30.5315ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (4.12s)
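The two assertions above count per-node status blocks. A sketch of approximately what multinode_test.go:364 and :368 check, assuming the test compares occurrences of "host: Stopped" and "kubelet: Stopped" in the status output against the expected node count (the worker nodes were never created here, so only the control plane reports):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output captured above: only the control-plane node appears.
	status := "multinode-263000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	const wantNodes = 2 // a multinode cluster: control plane + worker
	if got := strings.Count(status, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
	if got := strings.Count(status, "kubelet: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
	}
}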

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-263000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-263000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.18056025s)
-- stdout --
	* [multinode-263000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	* Restarting existing qemu2 VM for "multinode-263000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-263000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 10:45:56.186572    7798 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:56.186699    7798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:56.186702    7798 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:56.186704    7798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:56.186838    7798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:45:56.187888    7798 out.go:298] Setting JSON to false
	I0729 10:45:56.203881    7798 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4525,"bootTime":1722270631,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:45:56.203945    7798 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:45:56.207824    7798 out.go:177] * [multinode-263000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:45:56.215934    7798 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:45:56.216040    7798 notify.go:220] Checking for updates...
	I0729 10:45:56.223749    7798 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:45:56.225062    7798 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:45:56.228742    7798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:45:56.231763    7798 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:45:56.233074    7798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:45:56.236046    7798 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:45:56.236301    7798 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:45:56.240727    7798 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:45:56.245764    7798 start.go:297] selected driver: qemu2
	I0729 10:45:56.245773    7798 start.go:901] validating driver "qemu2" against &{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:45:56.245859    7798 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:45:56.248102    7798 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:45:56.248121    7798 cni.go:84] Creating CNI manager for ""
	I0729 10:45:56.248129    7798 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 10:45:56.248180    7798 start.go:340] cluster config:
	{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:45:56.251555    7798 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:45:56.260698    7798 out.go:177] * Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	I0729 10:45:56.264768    7798 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:45:56.264787    7798 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:45:56.264797    7798 cache.go:56] Caching tarball of preloaded images
	I0729 10:45:56.264861    7798 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:45:56.264868    7798 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:45:56.264928    7798 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/multinode-263000/config.json ...
	I0729 10:45:56.265398    7798 start.go:360] acquireMachinesLock for multinode-263000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:45:56.265428    7798 start.go:364] duration metric: took 24.042µs to acquireMachinesLock for "multinode-263000"
	I0729 10:45:56.265438    7798 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:45:56.265445    7798 fix.go:54] fixHost starting: 
	I0729 10:45:56.265568    7798 fix.go:112] recreateIfNeeded on multinode-263000: state=Stopped err=<nil>
	W0729 10:45:56.265576    7798 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:45:56.269711    7798 out.go:177] * Restarting existing qemu2 VM for "multinode-263000" ...
	I0729 10:45:56.277764    7798 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:45:56.277800    7798 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:cd:d5:8b:ec:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2
	I0729 10:45:56.279949    7798 main.go:141] libmachine: STDOUT: 
	I0729 10:45:56.279971    7798 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:45:56.280003    7798 fix.go:56] duration metric: took 14.558542ms for fixHost
	I0729 10:45:56.280009    7798 start.go:83] releasing machines lock for "multinode-263000", held for 14.576625ms
	W0729 10:45:56.280017    7798 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:45:56.280058    7798 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:45:56.280063    7798 start.go:729] Will try again in 5 seconds ...
	I0729 10:46:01.282166    7798 start.go:360] acquireMachinesLock for multinode-263000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:01.282673    7798 start.go:364] duration metric: took 373.042µs to acquireMachinesLock for "multinode-263000"
	I0729 10:46:01.282790    7798 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:46:01.282808    7798 fix.go:54] fixHost starting: 
	I0729 10:46:01.283525    7798 fix.go:112] recreateIfNeeded on multinode-263000: state=Stopped err=<nil>
	W0729 10:46:01.283551    7798 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:46:01.290885    7798 out.go:177] * Restarting existing qemu2 VM for "multinode-263000" ...
	I0729 10:46:01.294748    7798 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:01.295004    7798 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:cd:d5:8b:ec:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/multinode-263000/disk.qcow2
	I0729 10:46:01.304076    7798 main.go:141] libmachine: STDOUT: 
	I0729 10:46:01.304151    7798 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:01.304236    7798 fix.go:56] duration metric: took 21.425833ms for fixHost
	I0729 10:46:01.304260    7798 start.go:83] releasing machines lock for "multinode-263000", held for 21.548334ms
	W0729 10:46:01.304493    7798 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:01.311920    7798 out.go:177] 
	W0729 10:46:01.315922    7798 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:46:01.315966    7798 out.go:239] * 
	* 
	W0729 10:46:01.318619    7798 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:46:01.326880    7798 out.go:177] 
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-263000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (68.294834ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
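The log shows minikube's start retry pattern: a failed host start is logged at start.go:714, retried once after a fixed five-second wait (start.go:729), and only the second failure escalates to GUEST_PROVISION. A rough sketch of that flow; startHost and the messages are illustrative, not minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails on this host.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // fixed back-off seen in the log
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}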

TestMultiNode/serial/ValidateNameConflict (20.22s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-263000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-263000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-263000-m01 --driver=qemu2 : exit status 80 (9.963358667s)
-- stdout --
	* [multinode-263000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-263000-m01" primary control-plane node in "multinode-263000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-263000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-263000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-263000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-263000-m02 --driver=qemu2 : exit status 80 (10.028877708s)
-- stdout --
	* [multinode-263000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-263000-m02" primary control-plane node in "multinode-263000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-263000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-263000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-263000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-263000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-263000: exit status 83 (80.437875ms)
-- stdout --
	* The control-plane node multinode-263000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-263000"
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-263000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (29.324333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.22s)
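This test deliberately picks profile names that collide with minikube's multinode machine-naming convention, visible above in "node delete m03": additional nodes are named "<profile>-m<NN>". An illustrative sketch of that convention (the helper is hypothetical, not minikube code):

package main

import "fmt"

// nodeMachineName mirrors the "<profile>-m<NN>" naming visible in the
// log ("m02", "m03"); hypothetical helper, not minikube's actual code.
func nodeMachineName(profile string, idx int) string {
	if idx == 1 {
		return profile // the first node reuses the profile name
	}
	return fmt.Sprintf("%s-m%02d", profile, idx)
}

func main() {
	// A standalone profile named "multinode-263000-m02" collides with the
	// second node of the "multinode-263000" cluster.
	fmt.Println(nodeMachineName("multinode-263000", 2))
}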

TestPreload (10.05s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-399000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-399000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.9034295s)
-- stdout --
	* [test-preload-399000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-399000" primary control-plane node in "test-preload-399000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-399000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 10:46:21.773001    7852 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:21.773123    7852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:21.773126    7852 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:21.773128    7852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:21.773259    7852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:46:21.774312    7852 out.go:298] Setting JSON to false
	I0729 10:46:21.790315    7852 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4550,"bootTime":1722270631,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:46:21.790381    7852 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:46:21.796449    7852 out.go:177] * [test-preload-399000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:46:21.804392    7852 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:46:21.804442    7852 notify.go:220] Checking for updates...
	I0729 10:46:21.812334    7852 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:46:21.815389    7852 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:46:21.819197    7852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:46:21.822355    7852 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:46:21.825348    7852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:46:21.828673    7852 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:46:21.828729    7852 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:46:21.833339    7852 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:46:21.840338    7852 start.go:297] selected driver: qemu2
	I0729 10:46:21.840343    7852 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:46:21.840348    7852 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:46:21.842628    7852 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:46:21.846292    7852 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:46:21.849437    7852 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:46:21.849460    7852 cni.go:84] Creating CNI manager for ""
	I0729 10:46:21.849469    7852 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:46:21.849480    7852 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:46:21.849509    7852 start.go:340] cluster config:
	{Name:test-preload-399000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-399000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:46:21.853169    7852 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:21.861332    7852 out.go:177] * Starting "test-preload-399000" primary control-plane node in "test-preload-399000" cluster
	I0729 10:46:21.865372    7852 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0729 10:46:21.865467    7852 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/test-preload-399000/config.json ...
	I0729 10:46:21.865490    7852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/test-preload-399000/config.json: {Name:mkf67600f66d0d2a4d59bbb756fdf6283f49004f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:21.865510    7852 cache.go:107] acquiring lock: {Name:mk999e4e69584c4a64cb49ec9e99877f268d7913 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:21.865522    7852 cache.go:107] acquiring lock: {Name:mkec979828fdc8fb4bfecab9a43cad8da369e87b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:21.865530    7852 cache.go:107] acquiring lock: {Name:mk30460561c28ec3bc09db514f93d755cadbec75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:21.865729    7852 cache.go:107] acquiring lock: {Name:mk2fa0dcb10d8f11941d77d36a6ff2d05df19962 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:21.865698    7852 cache.go:107] acquiring lock: {Name:mk7d6e87271c0427a6105e31237dc7a5111ff0ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:21.865759    7852 cache.go:107] acquiring lock: {Name:mk0487503b89c218a9d31f654bdc31a69ed19984 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:21.865816    7852 cache.go:107] acquiring lock: {Name:mk9e517c40b1320685a522e2d7454ddae0bab579 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:21.865916    7852 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:46:21.865932    7852 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 10:46:21.865949    7852 start.go:360] acquireMachinesLock for test-preload-399000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:21.865969    7852 cache.go:107] acquiring lock: {Name:mk4de046f3285b9475879648124a792e7131225b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:21.866009    7852 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 10:46:21.866029    7852 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 10:46:21.866050    7852 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 10:46:21.866072    7852 start.go:364] duration metric: took 112µs to acquireMachinesLock for "test-preload-399000"
	I0729 10:46:21.866114    7852 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:46:21.866189    7852 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:46:21.866144    7852 start.go:93] Provisioning new machine with config: &{Name:test-preload-399000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-399000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:21.866197    7852 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:21.866093    7852 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 10:46:21.873337    7852 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:46:21.878260    7852 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 10:46:21.878273    7852 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 10:46:21.878316    7852 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 10:46:21.878550    7852 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:46:21.878683    7852 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:46:21.881225    7852 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:46:21.881353    7852 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 10:46:21.881445    7852 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 10:46:21.891538    7852 start.go:159] libmachine.API.Create for "test-preload-399000" (driver="qemu2")
	I0729 10:46:21.891569    7852 client.go:168] LocalClient.Create starting
	I0729 10:46:21.891702    7852 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:46:21.891740    7852 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:21.891750    7852 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:21.891788    7852 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:46:21.891811    7852 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:21.891820    7852 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:21.892247    7852 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:46:22.098577    7852 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:22.191413    7852 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:22.191436    7852 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:22.191666    7852 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2
	I0729 10:46:22.201525    7852 main.go:141] libmachine: STDOUT: 
	I0729 10:46:22.201541    7852 main.go:141] libmachine: STDERR: 
	I0729 10:46:22.201583    7852 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2 +20000M
	I0729 10:46:22.209941    7852 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:22.209955    7852 main.go:141] libmachine: STDERR: 
	I0729 10:46:22.209964    7852 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2
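
Note on the disk-creation step just logged: minikube shells out to qemu-img twice, first converting the raw seed image to qcow2, then growing the virtual size by +20000M (the qcow2 stays sparse, so 20 GB is not written up front). A minimal Go sketch of that sequence, assuming qemu-img is on PATH and using placeholder file names; it mirrors the logged commands but is not minikube's actual implementation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createDisk mirrors the convert-then-resize sequence in the log above.
    func createDisk(raw, qcow2 string, extraMB int) error {
        // 1) convert the raw seed image into qcow2 format
        if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
            return fmt.Errorf("convert: %v: %s", err, out)
        }
        // 2) grow the virtual size; the qcow2 file stays sparse on the host
        if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
            return fmt.Errorf("resize: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
            fmt.Println(err)
        }
    }
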
	I0729 10:46:22.209967    7852 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:22.209978    7852 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:22.210003    7852 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:74:f3:67:f6:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2
	I0729 10:46:22.211677    7852 main.go:141] libmachine: STDOUT: 
	I0729 10:46:22.211691    7852 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:22.211705    7852 client.go:171] duration metric: took 320.138375ms to LocalClient.Create
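
The create attempt dies at the same point throughout this report: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, meaning the socket_vmnet daemon was not running (or not listening) on the build host. A quick preflight probe for that condition, sketched in Go with the socket path taken from the log; this is illustrative, not part of minikube:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Try to connect the same way socket_vmnet_client would.
        c, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // On this host the probe would print the "connection refused" seen above.
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        c.Close()
        fmt.Println("socket_vmnet is up")
    }
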
	I0729 10:46:22.261629    7852 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 10:46:22.268853    7852 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 10:46:22.281178    7852 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0729 10:46:22.336079    7852 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 10:46:22.336115    7852 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 10:46:22.380138    7852 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 10:46:22.388780    7852 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0729 10:46:22.423296    7852 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0729 10:46:22.423325    7852 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 557.6645ms
	I0729 10:46:22.423345    7852 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0729 10:46:22.656889    7852 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 10:46:22.656986    7852 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 10:46:22.854098    7852 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 10:46:22.854146    7852 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 988.650375ms
	I0729 10:46:22.854170    7852 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
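
The "arch mismatch: want arm64 got amd64. fixing" warnings above occur because the registry's default manifest resolution for those images yields amd64 on this host, so the cache must re-resolve for arm64. The general remedy is to request an explicit platform when pulling. A hedged sketch using go-containerregistry, the library minikube builds its image handling on; the exact fix-up in image.go may differ:

    package main

    import (
        "fmt"

        "github.com/google/go-containerregistry/pkg/name"
        v1 "github.com/google/go-containerregistry/pkg/v1"
        "github.com/google/go-containerregistry/pkg/v1/remote"
    )

    func main() {
        ref, err := name.ParseReference("gcr.io/k8s-minikube/storage-provisioner:v5")
        if err != nil {
            panic(err)
        }
        // Ask for the arm64/linux manifest explicitly instead of the registry default.
        img, err := remote.Image(ref, remote.WithPlatform(v1.Platform{Architecture: "arm64", OS: "linux"}))
        if err != nil {
            panic(err)
        }
        digest, err := img.Digest()
        if err != nil {
            panic(err)
        }
        fmt.Println("resolved arm64 digest:", digest)
    }
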
	I0729 10:46:23.617649    7852 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 10:46:23.824865    7852 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0729 10:46:23.824934    7852 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 1.959265833s
	I0729 10:46:23.824963    7852 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0729 10:46:24.211952    7852 start.go:128] duration metric: took 2.345773791s to createHost
	I0729 10:46:24.211998    7852 start.go:83] releasing machines lock for "test-preload-399000", held for 2.345929334s
	W0729 10:46:24.212068    7852 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:24.221303    7852 out.go:177] * Deleting "test-preload-399000" in qemu2 ...
	W0729 10:46:24.251436    7852 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:24.251468    7852 start.go:729] Will try again in 5 seconds ...
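
When the first host create fails, minikube deletes the half-built profile and retries exactly once after a 5-second pause (start.go:729), which is why the same "Connection refused" appears twice per test in this report. The shape of that flow, as a schematic sketch rather than minikube's actual code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startWithRetry: one cleanup-and-retry after a 5s pause, then give up.
    func startWithRetry(create func() error, cleanup func()) error {
        if err := create(); err == nil {
            return nil
        }
        cleanup()                   // "Deleting ... in qemu2 ..."
        time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
        return create()             // second and final attempt
    }

    func main() {
        err := startWithRetry(
            func() error { return errors.New(`Failed to connect to "/var/run/socket_vmnet"`) },
            func() { fmt.Println("deleting profile ...") },
        )
        fmt.Println("final:", err)
    }
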
	I0729 10:46:24.546454    7852 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0729 10:46:24.546531    7852 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.680602458s
	I0729 10:46:24.546556    7852 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0729 10:46:26.132693    7852 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0729 10:46:26.132745    7852 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.267305708s
	I0729 10:46:26.132772    7852 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0729 10:46:26.149942    7852 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0729 10:46:26.149993    7852 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.284551875s
	I0729 10:46:26.150018    7852 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0729 10:46:29.074952    7852 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0729 10:46:29.075006    7852 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.209421417s
	I0729 10:46:29.075036    7852 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0729 10:46:29.252301    7852 start.go:360] acquireMachinesLock for test-preload-399000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:29.252711    7852 start.go:364] duration metric: took 353.167µs to acquireMachinesLock for "test-preload-399000"
	I0729 10:46:29.252812    7852 start.go:93] Provisioning new machine with config: &{Name:test-preload-399000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-399000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:46:29.253054    7852 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:46:29.265521    7852 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:46:29.316028    7852 start.go:159] libmachine.API.Create for "test-preload-399000" (driver="qemu2")
	I0729 10:46:29.316074    7852 client.go:168] LocalClient.Create starting
	I0729 10:46:29.316208    7852 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:46:29.316274    7852 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:29.316317    7852 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:29.316384    7852 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:46:29.316429    7852 main.go:141] libmachine: Decoding PEM data...
	I0729 10:46:29.316445    7852 main.go:141] libmachine: Parsing certificate...
	I0729 10:46:29.316992    7852 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:46:29.476619    7852 main.go:141] libmachine: Creating SSH key...
	I0729 10:46:29.578122    7852 main.go:141] libmachine: Creating Disk image...
	I0729 10:46:29.578128    7852 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:46:29.578324    7852 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2
	I0729 10:46:29.587734    7852 main.go:141] libmachine: STDOUT: 
	I0729 10:46:29.587752    7852 main.go:141] libmachine: STDERR: 
	I0729 10:46:29.587808    7852 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2 +20000M
	I0729 10:46:29.595696    7852 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:46:29.595710    7852 main.go:141] libmachine: STDERR: 
	I0729 10:46:29.595721    7852 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2
	I0729 10:46:29.595725    7852 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:46:29.595738    7852 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:46:29.595774    7852 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:b6:56:80:25:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/test-preload-399000/disk.qcow2
	I0729 10:46:29.597448    7852 main.go:141] libmachine: STDOUT: 
	I0729 10:46:29.597465    7852 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:46:29.597478    7852 client.go:171] duration metric: took 281.40225ms to LocalClient.Create
	I0729 10:46:30.783033    7852 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0729 10:46:30.783106    7852 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.917458291s
	I0729 10:46:30.783129    7852 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0729 10:46:30.783205    7852 cache.go:87] Successfully saved all images to host disk.
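
Worth noting: the image caching above ran to completion even though both VM creates failed, because caching is fanned out on its own goroutines, one per image, each guarded by its own lock (the cache.go:107 acquisitions near the top of this trace). A sketch of that fan-out pattern with golang.org/x/sync/errgroup; the image list comes from the log and the loop body is a stand-in for the real download-and-save-to-tar work:

    package main

    import (
        "fmt"

        "golang.org/x/sync/errgroup"
    )

    func main() {
        images := []string{
            "registry.k8s.io/pause:3.7",
            "registry.k8s.io/etcd:3.5.3-0",
            "registry.k8s.io/coredns/coredns:v1.8.6",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        var g errgroup.Group
        for _, img := range images {
            img := img // capture loop variable (pre-Go 1.22 idiom)
            g.Go(func() error {
                fmt.Println("caching", img) // stand-in for download + save-to-tar
                return nil
            })
        }
        if err := g.Wait(); err != nil {
            fmt.Println("cache failed:", err)
        }
    }
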
	I0729 10:46:31.599650    7852 start.go:128] duration metric: took 2.346600375s to createHost
	I0729 10:46:31.599710    7852 start.go:83] releasing machines lock for "test-preload-399000", held for 2.347014084s
	W0729 10:46:31.600058    7852 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-399000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-399000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:46:31.614592    7852 out.go:177] 
	W0729 10:46:31.618681    7852 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:46:31.618730    7852 out.go:239] * 
	* 
	W0729 10:46:31.621568    7852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:46:31.633578    7852 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-399000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-29 10:46:31.652517 -0700 PDT m=+693.679575876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-399000 -n test-preload-399000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-399000 -n test-preload-399000: exit status 7 (67.88225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-399000" host is not running, skipping log retrieval (state="Stopped")
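
On the post-mortem probe: --format={{.Host}} is a Go text/template rendered against minikube's status value, and exit status 7 reports host state (here Stopped) rather than a command failure, hence "may be ok". A minimal illustration of the template mechanism with the standard library; the struct and its fields are assumed for illustration, not minikube's actual types:

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a stand-in for the value the --format template is applied to.
    type Status struct {
        Host, Kubelet, APIServer string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        // Prints "Stopped", matching the post-mortem output above.
        tmpl.Execute(os.Stdout, Status{Host: "Stopped"})
    }
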
helpers_test.go:175: Cleaning up "test-preload-399000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-399000
--- FAIL: TestPreload (10.05s)

TestScheduledStopUnix (10.01s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-338000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-338000 --memory=2048 --driver=qemu2 : exit status 80 (9.85773125s)

-- stdout --
	* [scheduled-stop-338000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-338000" primary control-plane node in "scheduled-stop-338000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-338000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-338000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
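
The "exit status 80" the harness reports is the child process's exit code, which the stderr above ties to the GUEST_PROVISION error class. In a Go test helper this is recovered from *exec.ExitError roughly as follows (binary and arguments taken from the log):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "start",
            "-p", "scheduled-stop-338000", "--memory=2048", "--driver=qemu2")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // Here: 80, which the stderr above attributes to GUEST_PROVISION.
            fmt.Println("exit status", ee.ExitCode())
        }
    }
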
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-29 10:46:41.653721 -0700 PDT m=+703.680948626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-338000 -n scheduled-stop-338000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-338000 -n scheduled-stop-338000: exit status 7 (68.33575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-338000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-338000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-338000
--- FAIL: TestScheduledStopUnix (10.01s)

TestSkaffold (12.09s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3655606097 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3655606097 version: (1.069887792s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-136000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-136000 --memory=2600 --driver=qemu2 : exit status 80 (9.839955458s)

-- stdout --
	* [skaffold-136000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-136000" primary control-plane node in "skaffold-136000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-136000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-136000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
panic.go:626: *** TestSkaffold FAILED at 2024-07-29 10:46:53.749789 -0700 PDT m=+715.777220417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-136000 -n skaffold-136000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-136000 -n skaffold-136000: exit status 7 (63.890458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-136000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-136000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-136000
--- FAIL: TestSkaffold (12.09s)

TestRunningBinaryUpgrade (586.59s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4122560816 start -p running-upgrade-504000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4122560816 start -p running-upgrade-504000 --memory=2200 --vm-driver=qemu2 : (50.044109709s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-504000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-504000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.164940042s)

-- stdout --
	* [running-upgrade-504000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-504000" primary control-plane node in "running-upgrade-504000" cluster
	* Updating the running qemu2 "running-upgrade-504000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 10:48:25.776597    8229 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:48:25.776714    8229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:48:25.776717    8229 out.go:304] Setting ErrFile to fd 2...
	I0729 10:48:25.776720    8229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:48:25.776844    8229 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:48:25.777833    8229 out.go:298] Setting JSON to false
	I0729 10:48:25.794690    8229 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4674,"bootTime":1722270631,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:48:25.794806    8229 start.go:137] gopshost.Virtualization returned error: not implemented yet
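
The hostinfo line above is gopsutil-style host introspection, and the warning that follows is expected on darwin/arm64, where the virtualization probe is unimplemented. A small sketch, assuming gopsutil v3; minikube's actual call site may differ:

    package main

    import (
        "fmt"

        "github.com/shirou/gopsutil/v3/host"
    )

    func main() {
        info, err := host.Info()
        if err != nil {
            panic(err)
        }
        fmt.Println(info.Hostname, info.Platform, info.KernelArch)
        // On darwin/arm64 this returns "not implemented yet", as in the warning above.
        if _, _, err := host.Virtualization(); err != nil {
            fmt.Println("virtualization probe:", err)
        }
    }
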
	I0729 10:48:25.799967    8229 out.go:177] * [running-upgrade-504000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:48:25.806899    8229 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:48:25.806953    8229 notify.go:220] Checking for updates...
	I0729 10:48:25.814880    8229 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:48:25.818861    8229 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:48:25.822934    8229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:48:25.825885    8229 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:48:25.828897    8229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:48:25.832133    8229 config.go:182] Loaded profile config "running-upgrade-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:48:25.834825    8229 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 10:48:25.837885    8229 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:48:25.841909    8229 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:48:25.848901    8229 start.go:297] selected driver: qemu2
	I0729 10:48:25.848907    8229 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51249 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:48:25.848995    8229 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:48:25.851545    8229 cni.go:84] Creating CNI manager for ""
	I0729 10:48:25.851565    8229 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:48:25.851590    8229 start.go:340] cluster config:
	{Name:running-upgrade-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51249 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:48:25.851642    8229 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:48:25.858843    8229 out.go:177] * Starting "running-upgrade-504000" primary control-plane node in "running-upgrade-504000" cluster
	I0729 10:48:25.862871    8229 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:48:25.862886    8229 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 10:48:25.862895    8229 cache.go:56] Caching tarball of preloaded images
	I0729 10:48:25.862946    8229 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:48:25.862952    8229 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 10:48:25.863024    8229 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/config.json ...
	I0729 10:48:25.863542    8229 start.go:360] acquireMachinesLock for running-upgrade-504000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:48:25.863578    8229 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "running-upgrade-504000"
	I0729 10:48:25.863589    8229 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:48:25.863593    8229 fix.go:54] fixHost starting: 
	I0729 10:48:25.864246    8229 fix.go:112] recreateIfNeeded on running-upgrade-504000: state=Running err=<nil>
	W0729 10:48:25.864258    8229 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:48:25.872887    8229 out.go:177] * Updating the running qemu2 "running-upgrade-504000" VM ...
	I0729 10:48:25.876846    8229 machine.go:94] provisionDockerMachine start ...
	I0729 10:48:25.876891    8229 main.go:141] libmachine: Using SSH client type: native
	I0729 10:48:25.877003    8229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ecea10] 0x100ed1270 <nil>  [] 0s} localhost 51217 <nil> <nil>}
	I0729 10:48:25.877008    8229 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 10:48:25.930705    8229 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-504000
	
	I0729 10:48:25.930725    8229 buildroot.go:166] provisioning hostname "running-upgrade-504000"
	I0729 10:48:25.930792    8229 main.go:141] libmachine: Using SSH client type: native
	I0729 10:48:25.930925    8229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ecea10] 0x100ed1270 <nil>  [] 0s} localhost 51217 <nil> <nil>}
	I0729 10:48:25.930930    8229 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-504000 && echo "running-upgrade-504000" | sudo tee /etc/hostname
	I0729 10:48:25.988482    8229 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-504000
	
	I0729 10:48:25.988535    8229 main.go:141] libmachine: Using SSH client type: native
	I0729 10:48:25.988657    8229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ecea10] 0x100ed1270 <nil>  [] 0s} localhost 51217 <nil> <nil>}
	I0729 10:48:25.988665    8229 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-504000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-504000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-504000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:48:26.042183    8229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
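
Provisioning drives the VM entirely over SSH: the hostname command, the tee to /etc/hostname, and the /etc/hosts patch above all run through libmachine's native SSH client against localhost:51217. A stripped-down equivalent using golang.org/x/crypto/ssh, with the key path and port taken from this log; error handling is abbreviated and the host-key check is disabled, which is acceptable only for a throwaway local test VM:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, _ := os.ReadFile("/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/running-upgrade-504000/id_rsa")
        signer, _ := ssh.ParsePrivateKey(key)
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test VM only
        }
        client, err := ssh.Dial("tcp", "localhost:51217", cfg) // port from the log
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, _ := client.NewSession()
        defer sess.Close()
        out, _ := sess.CombinedOutput("hostname") // prints "running-upgrade-504000"
        fmt.Printf("%s", out)
    }
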
	I0729 10:48:26.042195    8229 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19339-6071/.minikube CaCertPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19339-6071/.minikube}
	I0729 10:48:26.042207    8229 buildroot.go:174] setting up certificates
	I0729 10:48:26.042212    8229 provision.go:84] configureAuth start
	I0729 10:48:26.042218    8229 provision.go:143] copyHostCerts
	I0729 10:48:26.042305    8229 exec_runner.go:144] found /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.pem, removing ...
	I0729 10:48:26.042311    8229 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.pem
	I0729 10:48:26.042456    8229 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.pem (1078 bytes)
	I0729 10:48:26.042643    8229 exec_runner.go:144] found /Users/jenkins/minikube-integration/19339-6071/.minikube/cert.pem, removing ...
	I0729 10:48:26.042646    8229 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19339-6071/.minikube/cert.pem
	I0729 10:48:26.042697    8229 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19339-6071/.minikube/cert.pem (1123 bytes)
	I0729 10:48:26.042801    8229 exec_runner.go:144] found /Users/jenkins/minikube-integration/19339-6071/.minikube/key.pem, removing ...
	I0729 10:48:26.042804    8229 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19339-6071/.minikube/key.pem
	I0729 10:48:26.042849    8229 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19339-6071/.minikube/key.pem (1675 bytes)
	I0729 10:48:26.042941    8229 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-504000 san=[127.0.0.1 localhost minikube running-upgrade-504000]
	I0729 10:48:26.089827    8229 provision.go:177] copyRemoteCerts
	I0729 10:48:26.089877    8229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:48:26.089886    8229 sshutil.go:53] new ssh client: &{IP:localhost Port:51217 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/running-upgrade-504000/id_rsa Username:docker}
	I0729 10:48:26.121670    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 10:48:26.128138    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 10:48:26.135315    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 10:48:26.142152    8229 provision.go:87] duration metric: took 99.937042ms to configureAuth
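
configureAuth regenerates the Docker TLS material in two steps: copyHostCerts refreshes ca/cert/key.pem in the machine store, then a server certificate is minted with the four SANs listed above and pushed to /etc/docker by copyRemoteCerts. minikube does the minting in Go; a rough openssl equivalent, assuming an existing CA pair (ca.pem / ca-key.pem) and bash for the process substitution, would be:

	# illustrative only, not minikube's actual code path
	openssl req -new -newkey rsa:2048 -nodes \
	    -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.running-upgrade-504000"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -days 365 -out server.pem \
	    -extfile <(printf "subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:running-upgrade-504000")
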
	I0729 10:48:26.142162    8229 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:48:26.142269    8229 config.go:182] Loaded profile config "running-upgrade-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:48:26.142307    8229 main.go:141] libmachine: Using SSH client type: native
	I0729 10:48:26.142395    8229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ecea10] 0x100ed1270 <nil>  [] 0s} localhost 51217 <nil> <nil>}
	I0729 10:48:26.142406    8229 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 10:48:26.193621    8229 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 10:48:26.193630    8229 buildroot.go:70] root file system type: tmpfs
	I0729 10:48:26.193678    8229 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 10:48:26.193736    8229 main.go:141] libmachine: Using SSH client type: native
	I0729 10:48:26.193856    8229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ecea10] 0x100ed1270 <nil>  [] 0s} localhost 51217 <nil> <nil>}
	I0729 10:48:26.193889    8229 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 10:48:26.253695    8229 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 10:48:26.253751    8229 main.go:141] libmachine: Using SSH client type: native
	I0729 10:48:26.253872    8229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ecea10] 0x100ed1270 <nil>  [] 0s} localhost 51217 <nil> <nil>}
	I0729 10:48:26.253880    8229 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 10:48:26.307896    8229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
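
The unit is staged as docker.service.new, and the one-liner above installs it only when it differs from the live unit: diff -u exits non-zero on any difference, which triggers the move, daemon-reload, enable, and restart; when the files already match, nothing happens. The same idempotent-update idiom in isolation:

	new=/lib/systemd/system/docker.service.new
	live=/lib/systemd/system/docker.service
	# replace and restart only if the staged unit actually changed
	sudo diff -u "$live" "$new" || {
	    sudo mv "$new" "$live"
	    sudo systemctl -f daemon-reload
	    sudo systemctl -f enable docker
	    sudo systemctl -f restart docker
	}
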
	I0729 10:48:26.307908    8229 machine.go:97] duration metric: took 431.063833ms to provisionDockerMachine
	I0729 10:48:26.307914    8229 start.go:293] postStartSetup for "running-upgrade-504000" (driver="qemu2")
	I0729 10:48:26.307920    8229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:48:26.307966    8229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:48:26.307976    8229 sshutil.go:53] new ssh client: &{IP:localhost Port:51217 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/running-upgrade-504000/id_rsa Username:docker}
	I0729 10:48:26.340782    8229 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:48:26.342109    8229 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 10:48:26.342118    8229 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19339-6071/.minikube/addons for local assets ...
	I0729 10:48:26.342207    8229 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19339-6071/.minikube/files for local assets ...
	I0729 10:48:26.342328    8229 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem -> 65432.pem in /etc/ssl/certs
	I0729 10:48:26.342466    8229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:48:26.345161    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem --> /etc/ssl/certs/65432.pem (1708 bytes)
	I0729 10:48:26.352226    8229 start.go:296] duration metric: took 44.308208ms for postStartSetup
	I0729 10:48:26.352242    8229 fix.go:56] duration metric: took 488.656375ms for fixHost
	I0729 10:48:26.352272    8229 main.go:141] libmachine: Using SSH client type: native
	I0729 10:48:26.352373    8229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ecea10] 0x100ed1270 <nil>  [] 0s} localhost 51217 <nil> <nil>}
	I0729 10:48:26.352378    8229 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 10:48:26.405610    8229 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722275306.493521055
	
	I0729 10:48:26.405620    8229 fix.go:216] guest clock: 1722275306.493521055
	I0729 10:48:26.405624    8229 fix.go:229] Guest: 2024-07-29 10:48:26.493521055 -0700 PDT Remote: 2024-07-29 10:48:26.352243 -0700 PDT m=+0.596104751 (delta=141.278055ms)
	I0729 10:48:26.405637    8229 fix.go:200] guest clock delta is within tolerance: 141.278055ms
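
The fixHost step samples the guest clock with date +%s.%N over SSH and compares it to the host's wall clock at the moment the output arrives; the 141ms delta here is inside minikube's tolerance, so the guest clock is left alone. A sketch of the same measurement (GNU coreutils date, bc, and this test's SSH port assumed):

	guest=$(ssh -p 51217 docker@localhost 'date +%s.%N')
	host=$(date +%s.%N)
	# positive delta means the host clock is ahead of the guest
	echo "guest clock delta: $(echo "$host - $guest" | bc)s"
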
	I0729 10:48:26.405639    8229 start.go:83] releasing machines lock for "running-upgrade-504000", held for 542.065833ms
	I0729 10:48:26.405706    8229 ssh_runner.go:195] Run: cat /version.json
	I0729 10:48:26.405715    8229 sshutil.go:53] new ssh client: &{IP:localhost Port:51217 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/running-upgrade-504000/id_rsa Username:docker}
	I0729 10:48:26.405706    8229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:48:26.405747    8229 sshutil.go:53] new ssh client: &{IP:localhost Port:51217 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/running-upgrade-504000/id_rsa Username:docker}
	W0729 10:48:26.406280    8229 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51217: connect: connection refused
	I0729 10:48:26.406300    8229 retry.go:31] will retry after 225.545679ms: dial tcp [::1]:51217: connect: connection refused
	W0729 10:48:26.433493    8229 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 10:48:26.433558    8229 ssh_runner.go:195] Run: systemctl --version
	I0729 10:48:26.436048    8229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:48:26.437706    8229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:48:26.437741    8229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 10:48:26.440948    8229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 10:48:26.445574    8229 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:48:26.445580    8229 start.go:495] detecting cgroup driver to use...
	I0729 10:48:26.445687    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:48:26.451012    8229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 10:48:26.453947    8229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 10:48:26.457385    8229 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 10:48:26.457415    8229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 10:48:26.460563    8229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:48:26.463481    8229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 10:48:26.467166    8229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:48:26.469947    8229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:48:26.473204    8229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 10:48:26.476737    8229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 10:48:26.480193    8229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 10:48:26.483380    8229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:48:26.485956    8229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:48:26.489084    8229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:48:26.568029    8229 ssh_runner.go:195] Run: sudo systemctl restart containerd
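
The sed batch above edits /etc/containerd/config.toml in place: the sandbox image is pinned to registry.k8s.io/pause:3.7, restrict_oom_score_adj is disabled, SystemdCgroup is forced off to match the cgroupfs driver, the legacy runtime.v1.linux and runc.v1 shims are mapped to runc.v2, the CNI conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports is injected under the CRI plugin. The relevant fragment afterwards looks roughly like this (a reconstruction from the sed expressions, not captured output):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  restrict_oom_score_adj = false
	  sandbox_image = "registry.k8s.io/pause:3.7"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
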
	I0729 10:48:26.578340    8229 start.go:495] detecting cgroup driver to use...
	I0729 10:48:26.578407    8229 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 10:48:26.583751    8229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:48:26.589348    8229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:48:26.595238    8229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:48:26.599991    8229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 10:48:26.604434    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:48:26.609942    8229 ssh_runner.go:195] Run: which cri-dockerd
	I0729 10:48:26.611308    8229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 10:48:26.613864    8229 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 10:48:26.619368    8229 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 10:48:26.697784    8229 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 10:48:26.780374    8229 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 10:48:26.780435    8229 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 10:48:26.786938    8229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:48:26.865450    8229 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:48:29.837424    8229 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.972006708s)
	I0729 10:48:29.837493    8229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 10:48:29.841921    8229 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 10:48:29.848685    8229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:48:29.853586    8229 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 10:48:29.918370    8229 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 10:48:29.986746    8229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:48:30.049815    8229 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 10:48:30.055643    8229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:48:30.060008    8229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:48:30.128554    8229 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 10:48:30.168223    8229 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 10:48:30.168310    8229 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 10:48:30.170359    8229 start.go:563] Will wait 60s for crictl version
	I0729 10:48:30.170427    8229 ssh_runner.go:195] Run: which crictl
	I0729 10:48:30.171821    8229 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:48:30.183215    8229 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 10:48:30.183288    8229 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 10:48:30.195588    8229 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 10:48:30.215053    8229 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 10:48:30.215175    8229 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 10:48:30.216457    8229 kubeadm.go:883] updating cluster {Name:running-upgrade-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51249 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 10:48:30.216509    8229 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:48:30.216550    8229 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:48:30.227438    8229 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:48:30.227446    8229 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 10:48:30.227496    8229 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:48:30.230370    8229 ssh_runner.go:195] Run: which lz4
	I0729 10:48:30.231627    8229 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 10:48:30.232957    8229 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 10:48:30.232970    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 10:48:31.101700    8229 docker.go:649] duration metric: took 870.130459ms to copy over tarball
	I0729 10:48:31.101761    8229 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 10:48:32.267930    8229 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.166175208s)
	I0729 10:48:32.267945    8229 ssh_runner.go:146] rm: /preloaded.tar.lz4
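
Because the VM only carries the old k8s.gcr.io-tagged images, minikube falls back to its preload: the ~360MB lz4 tarball is scp'd into the guest, unpacked over /var/lib/docker with extended attributes preserved, deleted, and Docker is restarted to pick up the refreshed image store. The unpack step in isolation:

	# extract the preload over the Docker data root, keeping file capabilities
	sudo tar --xattrs --xattrs-include security.capability \
	    -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo systemctl restart docker
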
	I0729 10:48:32.283637    8229 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:48:32.286647    8229 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 10:48:32.292050    8229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:48:32.356726    8229 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:48:33.553637    8229 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.196913583s)
	I0729 10:48:33.553727    8229 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:48:33.564904    8229 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:48:33.564913    8229 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 10:48:33.564919    8229 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 10:48:33.568666    8229 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:48:33.570366    8229 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:48:33.572526    8229 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:48:33.572573    8229 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:48:33.574591    8229 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:48:33.574688    8229 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:48:33.576063    8229 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:48:33.576083    8229 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:48:33.577014    8229 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:48:33.577496    8229 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:48:33.578033    8229 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:48:33.578687    8229 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 10:48:33.579699    8229 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:48:33.580265    8229 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:48:33.580621    8229 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 10:48:33.581567    8229 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
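
LoadCachedImages first asks the local Docker daemon for each required image (every lookup fails here), then checks the runtime inside the VM with docker image inspect. Per image, the in-VM sequence that follows reduces to roughly this (a simplification of the actual image-ID comparison):

	img=registry.k8s.io/pause:3.7
	tarball=/var/lib/minikube/images/pause_3.7
	if ! docker image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	    docker rmi "$img" 2>/dev/null || true   # drop any stale or mismatched tag
	    sudo cat "$tarball" | docker load       # tarball was scp'd in from the host cache
	fi
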
	I0729 10:48:34.007782    8229 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:48:34.012292    8229 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:48:34.013330    8229 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:48:34.025784    8229 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:48:34.028044    8229 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 10:48:34.028087    8229 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:48:34.028115    8229 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:48:34.035549    8229 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 10:48:34.038204    8229 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 10:48:34.038224    8229 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:48:34.038264    8229 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:48:34.043657    8229 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 10:48:34.048089    8229 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 10:48:34.048107    8229 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:48:34.048160    8229 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:48:34.059149    8229 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 10:48:34.060848    8229 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 10:48:34.060865    8229 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:48:34.060912    8229 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0729 10:48:34.076953    8229 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 10:48:34.077083    8229 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:48:34.078822    8229 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 10:48:34.078909    8229 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 10:48:34.078922    8229 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 10:48:34.078948    8229 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 10:48:34.084264    8229 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 10:48:34.084288    8229 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:48:34.084322    8229 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 10:48:34.084339    8229 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 10:48:34.093861    8229 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 10:48:34.102798    8229 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 10:48:34.102810    8229 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 10:48:34.102818    8229 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:48:34.102859    8229 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:48:34.102922    8229 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 10:48:34.110696    8229 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 10:48:34.110820    8229 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:48:34.115036    8229 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 10:48:34.115061    8229 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 10:48:34.115073    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 10:48:34.115109    8229 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 10:48:34.115115    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0729 10:48:34.115140    8229 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:48:34.122959    8229 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 10:48:34.122985    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 10:48:34.142164    8229 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 10:48:34.142183    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0729 10:48:34.216809    8229 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 10:48:34.216914    8229 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:48:34.240474    8229 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 10:48:34.240497    8229 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:48:34.240509    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 10:48:34.256213    8229 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 10:48:34.256234    8229 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:48:34.256291    8229 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:48:34.392559    8229 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 10:48:34.465457    8229 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:48:34.465471    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0729 10:48:35.656022    8229 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.399720833s)
	I0729 10:48:35.656071    8229 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 10:48:35.656036    8229 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load": (1.190570667s)
	I0729 10:48:35.656111    8229 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 10:48:35.656537    8229 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:48:35.661731    8229 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 10:48:35.661814    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 10:48:35.722615    8229 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:48:35.722631    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 10:48:35.954844    8229 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 10:48:35.954882    8229 cache_images.go:92] duration metric: took 2.38999625s to LoadCachedImages
	W0729 10:48:35.954925    8229 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0729 10:48:35.954931    8229 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 10:48:35.954997    8229 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-504000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:48:35.955064    8229 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 10:48:35.968332    8229 cni.go:84] Creating CNI manager for ""
	I0729 10:48:35.968344    8229 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:48:35.968349    8229 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:48:35.968358    8229 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-504000 NodeName:running-upgrade-504000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:48:35.968422    8229 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-504000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
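
The generated file stacks four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is staged as /var/tmp/minikube/kubeadm.yaml.new so it can be diffed against the previous run's copy (the drift check at 10:48:36.567724 below). On a fresh start this config would be consumed by something like the following; minikube adds further flags such as preflight ignores, so treat this as the minimal form:

	sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml
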
	I0729 10:48:35.968481    8229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 10:48:35.972311    8229 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:48:35.972341    8229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 10:48:35.975581    8229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 10:48:35.980574    8229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:48:35.985585    8229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 10:48:35.990811    8229 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 10:48:35.992219    8229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:48:36.070988    8229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:48:36.076242    8229 certs.go:68] Setting up /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000 for IP: 10.0.2.15
	I0729 10:48:36.076248    8229 certs.go:194] generating shared ca certs ...
	I0729 10:48:36.076257    8229 certs.go:226] acquiring lock for ca certs: {Name:mkd86fdb55ccc20c129297fd51f66c0e2f8e203c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:48:36.076494    8229 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.key
	I0729 10:48:36.076541    8229 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/proxy-client-ca.key
	I0729 10:48:36.076546    8229 certs.go:256] generating profile certs ...
	I0729 10:48:36.076602    8229 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/client.key
	I0729 10:48:36.076614    8229 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.key.b66eed3b
	I0729 10:48:36.076624    8229 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.crt.b66eed3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 10:48:36.216110    8229 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.crt.b66eed3b ...
	I0729 10:48:36.216122    8229 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.crt.b66eed3b: {Name:mk3c61b698e6107987080b423fa24f4a6f5cc584 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:48:36.216404    8229 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.key.b66eed3b ...
	I0729 10:48:36.216409    8229 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.key.b66eed3b: {Name:mkb4a689ef9e84a9b00ed48236aca85355e5890d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:48:36.216525    8229 certs.go:381] copying /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.crt.b66eed3b -> /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.crt
	I0729 10:48:36.216678    8229 certs.go:385] copying /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.key.b66eed3b -> /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.key
	I0729 10:48:36.216827    8229 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/proxy-client.key
	I0729 10:48:36.216956    8229 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/6543.pem (1338 bytes)
	W0729 10:48:36.216988    8229 certs.go:480] ignoring /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/6543_empty.pem, impossibly tiny 0 bytes
	I0729 10:48:36.216994    8229 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 10:48:36.217013    8229 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem (1078 bytes)
	I0729 10:48:36.217034    8229 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:48:36.217053    8229 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/key.pem (1675 bytes)
	I0729 10:48:36.217093    8229 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem (1708 bytes)
	I0729 10:48:36.217446    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:48:36.225129    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:48:36.232345    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:48:36.239998    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:48:36.247358    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 10:48:36.254071    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 10:48:36.263400    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:48:36.309915    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:48:36.333394    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:48:36.342479    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/6543.pem --> /usr/share/ca-certificates/6543.pem (1338 bytes)
	I0729 10:48:36.351811    8229 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem --> /usr/share/ca-certificates/65432.pem (1708 bytes)
	I0729 10:48:36.359320    8229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:48:36.372657    8229 ssh_runner.go:195] Run: openssl version
	I0729 10:48:36.379448    8229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:48:36.385228    8229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:48:36.387004    8229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:48:36.387042    8229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:48:36.392561    8229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:48:36.396688    8229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6543.pem && ln -fs /usr/share/ca-certificates/6543.pem /etc/ssl/certs/6543.pem"
	I0729 10:48:36.401859    8229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6543.pem
	I0729 10:48:36.405654    8229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:36 /usr/share/ca-certificates/6543.pem
	I0729 10:48:36.405700    8229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6543.pem
	I0729 10:48:36.411108    8229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6543.pem /etc/ssl/certs/51391683.0"
	I0729 10:48:36.422414    8229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65432.pem && ln -fs /usr/share/ca-certificates/65432.pem /etc/ssl/certs/65432.pem"
	I0729 10:48:36.429172    8229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65432.pem
	I0729 10:48:36.438354    8229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:36 /usr/share/ca-certificates/65432.pem
	I0729 10:48:36.438402    8229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65432.pem
	I0729 10:48:36.446344    8229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65432.pem /etc/ssl/certs/3ec20f2e.0"
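
Each CA dropped into /usr/share/ca-certificates is also exposed under /etc/ssl/certs using OpenSSL's subject-hash naming, which is where the b5213941.0, 51391683.0, and 3ec20f2e.0 links come from. The idiom for one cert:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
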
	I0729 10:48:36.455389    8229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:48:36.468166    8229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 10:48:36.482560    8229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 10:48:36.488218    8229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 10:48:36.495544    8229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 10:48:36.501909    8229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 10:48:36.505765    8229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
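
The six -checkend probes confirm that none of the control-plane certificates expires within the next 86400 seconds: openssl x509 -checkend exits non-zero when the certificate will be expired by the given offset, which is the signal to regenerate it. For a single cert:

	if ! openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	    echo "certificate expires within 24h; regenerate it"
	fi
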
	I0729 10:48:36.510944    8229 kubeadm.go:392] StartCluster: {Name:running-upgrade-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51249 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:48:36.511052    8229 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:48:36.547514    8229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:48:36.559787    8229 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 10:48:36.559797    8229 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 10:48:36.559838    8229 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 10:48:36.565764    8229 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:48:36.565816    8229 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-504000" does not appear in /Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:48:36.565837    8229 kubeconfig.go:62] /Users/jenkins/minikube-integration/19339-6071/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-504000" cluster setting kubeconfig missing "running-upgrade-504000" context setting]
	I0729 10:48:36.566036    8229 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/kubeconfig: {Name:mkf75fdff2d3e918223b7f2dbeb4359c01007a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:48:36.566793    8229 kapi.go:59] client config for running-upgrade-504000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/client.key", CAFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102264080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
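
The rest.Config dump above shows minikube rebuilding an API client directly from the profile's certificate files after repairing the kubeconfig. A minimal client-go sketch of the same construction, reusing the Host and certificate paths from the kapi.go:59 line; the pod listing at the end is only an illustrative probe, not something minikube runs at this point:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Host and certificate paths copied from the kapi.go:59 line above.
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Illustrative probe only: with the apiserver down, this fails the
        // same way the healthz checks later in the log do.
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println(len(pods.Items), "pods in kube-system")
    }
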
	I0729 10:48:36.567724    8229 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 10:48:36.575488    8229 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-504000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
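
The diff above is the whole drift check: minikube renders the desired config to kubeadm.yaml.new, compares it with the kubeadm.yaml already on the node, and any difference (here the unix:// scheme on criSocket plus the cgroupfs, hairpin, and runtime-timeout kubelet settings) forces a reconfigure from the new file. A rough stand-alone sketch of that decision, not the actual kubeadm.go implementation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubeadmConfigDrifted reports whether the rendered config differs from
    // the one on the node. diff exits 0 for identical files, 1 for a diff.
    func kubeadmConfigDrifted() (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
        if err == nil {
            return false, "", nil // identical: keep the running configuration
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // drift: reconfigure from .new
        }
        return false, "", err // diff itself failed
    }

    func main() {
        drifted, diff, err := kubeadmConfigDrifted()
        if err != nil {
            panic(err)
        }
        if drifted {
            fmt.Print("kubeadm config drift detected:\n" + diff)
        }
    }
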
	I0729 10:48:36.575498    8229 kubeadm.go:1160] stopping kube-system containers ...
	I0729 10:48:36.575567    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:48:36.605890    8229 docker.go:483] Stopping containers: [7f8a2a78ad8d 196d52ea1dae b1cb7d438745 7e08c7a8aed1 37e5b0138e9b 94e5a5cc7096 25d154e8a937 931ffda3e7cf 3e59f95c1ca0 87d43b7d580e 1efc35cf636d d505b10f5e2e 8a944518b505 d24aee85e800 e2ad7fc436ff 15591d33bf7b fcc288561de2 e297451d4163]
	I0729 10:48:36.605964    8229 ssh_runner.go:195] Run: docker stop 7f8a2a78ad8d 196d52ea1dae b1cb7d438745 7e08c7a8aed1 37e5b0138e9b 94e5a5cc7096 25d154e8a937 931ffda3e7cf 3e59f95c1ca0 87d43b7d580e 1efc35cf636d d505b10f5e2e 8a944518b505 d24aee85e800 e2ad7fc436ff 15591d33bf7b fcc288561de2 e297451d4163
	I0729 10:48:36.831771    8229 ssh_runner.go:195] Run: sudo systemctl stop kubelet
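
Teardown before the restart is a three-step pattern in the lines above: enumerate kube-system containers by their k8s_*_(kube-system)_ names, stop them all in one docker stop call, then stop the kubelet so nothing respawns while kubeadm reconfigures. A hedged equivalent, shelling out just as the ssh_runner lines do (the function name and error handling are ours):

    package main

    import (
        "os/exec"
        "strings"
    )

    // stopKubeSystem mirrors the three teardown commands in the log: list the
    // kube-system container IDs, stop them all, then stop the kubelet.
    func stopKubeSystem() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            return err
        }
        if ids := strings.Fields(string(out)); len(ids) > 0 {
            if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
                return err
            }
        }
        // Kubelet goes down last so it cannot respawn static pods meanwhile.
        return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
    }

    func main() {
        if err := stopKubeSystem(); err != nil {
            panic(err)
        }
    }
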
	I0729 10:48:36.911823    8229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:48:36.915225    8229 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jul 29 17:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 29 17:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 29 17:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 29 17:48 /etc/kubernetes/scheduler.conf
	
	I0729 10:48:36.915267    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/admin.conf
	I0729 10:48:36.918061    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:48:36.918091    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:48:36.920876    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/kubelet.conf
	I0729 10:48:36.923597    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:48:36.923622    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:48:36.926588    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/controller-manager.conf
	I0729 10:48:36.929671    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:48:36.929696    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:48:36.933054    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/scheduler.conf
	I0729 10:48:36.935760    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:48:36.935782    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
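
The four grep/rm pairs above apply one rule: any control-plane kubeconfig that does not reference the expected endpoint https://control-plane.minikube.internal:51249 is treated as stale and deleted, so that the kubeadm init phase kubeconfig step run just below regenerates it. Condensed into a loop as a sketch:

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:51249"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            // Unreadable, or does not reference the expected endpoint:
            // remove it and let kubeadm regenerate it in the next phase.
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(f)
            }
        }
    }
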
	I0729 10:48:36.938505    8229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:48:36.941250    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:48:36.974662    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:48:37.517964    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:48:37.704817    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:48:37.728957    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:48:37.752860    8229 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:48:37.752939    8229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:48:37.758008    8229 api_server.go:72] duration metric: took 5.148459ms to wait for apiserver process to appear ...
	I0729 10:48:37.758020    8229 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:48:37.758029    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:48:42.760051    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:48:42.760071    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:48:47.760298    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:48:47.760365    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:48:52.761023    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:48:52.761141    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:48:57.762119    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:48:57.762147    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:49:02.762886    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:49:02.762958    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:49:07.764180    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:49:07.764268    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:49:12.766056    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:49:12.766136    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:49:17.768295    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:49:17.768372    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:49:22.770938    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:49:22.771039    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:49:27.773634    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:49:27.773707    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:49:32.776191    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:49:32.776276    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:49:37.778893    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
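
Each Checking/stopped pair above is a single probe: an HTTPS GET against /healthz that the client abandons after its timeout, followed by another attempt until the overall wait budget is spent. A self-contained sketch of the pattern; the 5-second client timeout matches the spacing of the log lines, while the overall deadline and sleep interval are assumptions:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5 s gap between probes
            Transport: &http.Transport{
                // The guest apiserver cert is not trusted by the host, so
                // the health probe skips verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy") // the outcome in this run
    }
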
	I0729 10:49:37.779246    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:49:37.810985    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:49:37.811109    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:49:37.829550    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:49:37.829653    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:49:37.847295    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:49:37.847375    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:49:37.858629    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:49:37.858697    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:49:37.875097    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:49:37.875170    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:49:37.889687    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:49:37.889753    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:49:37.899721    8229 logs.go:276] 0 containers: []
	W0729 10:49:37.899732    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:49:37.899779    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:49:37.910544    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:49:37.910561    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:49:37.910567    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:49:37.984661    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:49:37.984676    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:49:38.000331    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:49:38.000345    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:49:38.027480    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:49:38.027488    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:49:38.039330    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:49:38.039342    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:49:38.075215    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:49:38.075223    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:49:38.096600    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:49:38.096610    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:49:38.108207    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:49:38.108229    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:49:38.120460    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:49:38.120470    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:49:38.137340    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:49:38.137352    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:49:38.151667    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:49:38.151680    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:49:38.164681    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:49:38.164693    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:49:38.176056    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:49:38.176075    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:49:38.187957    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:49:38.187967    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:49:38.192677    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:49:38.192683    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:49:38.206573    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:49:38.206586    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
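
Whenever a probe cycle fails, the gatherer above walks the control-plane components, resolves each one's containers with docker ps -a --filter=name=k8s_<component>, and tails the last 400 lines of every match (plus the kubelet and docker journals and dmesg, omitted here). The core loop, sketched with hypothetical helper names:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs finds every container, running or exited, whose name
    // matches the k8s_<component> pattern, the same filter the log uses.
    func containerIDs(component string) []string {
        out, _ := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_"+component, "--format={{.ID}}").Output()
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "storage-provisioner"}
        for _, c := range components {
            for _, id := range containerIDs(c) {
                fmt.Printf("=== %s [%s] ===\n", c, id)
                // 400 lines is the tail depth used for every component above.
                out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Print(string(out))
            }
        }
    }
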
	I0729 10:49:40.728964    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:49:45.731660    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:49:45.731965    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:49:45.759673    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:49:45.759789    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:49:45.777373    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:49:45.777460    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:49:45.793651    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:49:45.793721    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:49:45.804685    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:49:45.804751    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:49:45.814916    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:49:45.814975    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:49:45.825515    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:49:45.825575    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:49:45.838810    8229 logs.go:276] 0 containers: []
	W0729 10:49:45.838823    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:49:45.838885    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:49:45.849322    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:49:45.849340    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:49:45.849345    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:49:45.871384    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:49:45.871394    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:49:45.883791    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:49:45.883803    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:49:45.895397    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:49:45.895408    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:49:45.906914    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:49:45.906928    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:49:45.911943    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:49:45.911951    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:49:45.947720    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:49:45.947734    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:49:45.961329    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:49:45.961341    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:49:45.972877    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:49:45.972886    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:49:45.989830    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:49:45.989844    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:49:46.024897    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:49:46.024905    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:49:46.035909    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:49:46.035919    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:49:46.047041    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:49:46.047053    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:49:46.059461    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:49:46.059473    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:49:46.077949    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:49:46.077961    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:49:46.091605    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:49:46.091617    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:49:48.620908    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:49:53.623222    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:49:53.623708    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:49:53.663177    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:49:53.663316    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:49:53.687317    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:49:53.687416    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:49:53.702618    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:49:53.702697    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:49:53.714636    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:49:53.714712    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:49:53.732278    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:49:53.732358    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:49:53.742411    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:49:53.742483    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:49:53.752175    8229 logs.go:276] 0 containers: []
	W0729 10:49:53.752185    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:49:53.752239    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:49:53.762632    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:49:53.762647    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:49:53.762652    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:49:53.798676    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:49:53.798691    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:49:53.812054    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:49:53.812064    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:49:53.823511    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:49:53.823526    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:49:53.837046    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:49:53.837059    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:49:53.848329    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:49:53.848338    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:49:53.852951    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:49:53.852957    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:49:53.875892    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:49:53.875904    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:49:53.887444    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:49:53.887455    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:49:53.921541    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:49:53.921548    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:49:53.935560    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:49:53.935574    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:49:53.946938    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:49:53.946950    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:49:53.959728    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:49:53.959742    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:49:53.986268    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:49:53.986275    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:49:53.999381    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:49:53.999390    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:49:54.020085    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:49:54.020095    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:49:56.540120    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:50:01.542877    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:50:01.543312    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:50:01.583118    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:50:01.583266    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:50:01.604507    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:50:01.604618    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:50:01.620470    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:50:01.620544    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:50:01.632834    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:50:01.632906    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:50:01.646831    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:50:01.646891    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:50:01.658220    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:50:01.658296    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:50:01.668681    8229 logs.go:276] 0 containers: []
	W0729 10:50:01.668690    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:50:01.668749    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:50:01.678880    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:50:01.678897    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:50:01.678903    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:50:01.691243    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:50:01.691255    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:50:01.726900    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:50:01.726909    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:50:01.740955    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:50:01.740965    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:50:01.752727    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:50:01.752739    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:50:01.764612    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:50:01.764623    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:50:01.791939    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:50:01.791953    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:50:01.809834    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:50:01.809846    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:50:01.814113    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:50:01.814122    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:50:01.856338    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:50:01.856351    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:50:01.881633    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:50:01.881645    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:50:01.895262    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:50:01.895274    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:50:01.914218    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:50:01.914229    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:50:01.931389    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:50:01.931402    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:50:01.942770    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:50:01.942783    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:50:01.957366    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:50:01.957377    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:50:04.472072    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:50:09.474735    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:50:09.475138    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:50:09.512110    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:50:09.512240    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:50:09.535869    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:50:09.535963    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:50:09.553017    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:50:09.553089    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:50:09.564061    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:50:09.564122    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:50:09.574295    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:50:09.574352    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:50:09.584565    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:50:09.584619    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:50:09.594387    8229 logs.go:276] 0 containers: []
	W0729 10:50:09.594398    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:50:09.594447    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:50:09.604349    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:50:09.604363    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:50:09.604367    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:50:09.625944    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:50:09.625959    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:50:09.639581    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:50:09.639592    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:50:09.650612    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:50:09.650622    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:50:09.662108    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:50:09.662122    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:50:09.673417    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:50:09.673429    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:50:09.708583    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:50:09.708591    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:50:09.720064    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:50:09.720078    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:50:09.735977    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:50:09.735987    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:50:09.750096    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:50:09.750104    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:50:09.761921    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:50:09.761934    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:50:09.779933    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:50:09.779942    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:50:09.814558    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:50:09.814567    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:50:09.828951    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:50:09.828962    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:50:09.856594    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:50:09.856609    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:50:09.869223    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:50:09.869232    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:50:12.375750    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:50:17.378214    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:50:17.378657    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:50:17.417865    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:50:17.417997    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:50:17.440431    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:50:17.440538    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:50:17.456597    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:50:17.456669    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:50:17.473327    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:50:17.473404    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:50:17.484130    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:50:17.484188    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:50:17.494871    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:50:17.494934    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:50:17.505201    8229 logs.go:276] 0 containers: []
	W0729 10:50:17.505210    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:50:17.505261    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:50:17.515653    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:50:17.515670    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:50:17.515678    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:50:17.533551    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:50:17.533564    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:50:17.545099    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:50:17.545111    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:50:17.581661    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:50:17.581673    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:50:17.597763    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:50:17.597771    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:50:17.609175    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:50:17.609188    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:50:17.635008    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:50:17.635017    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:50:17.656746    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:50:17.656759    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:50:17.668878    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:50:17.668891    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:50:17.680613    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:50:17.680625    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:50:17.716685    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:50:17.716696    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:50:17.728187    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:50:17.728200    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:50:17.740290    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:50:17.740301    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:50:17.751898    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:50:17.751909    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:50:17.756225    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:50:17.756233    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:50:17.783470    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:50:17.783480    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:50:20.298299    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:50:25.300910    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:50:25.301256    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:50:25.333705    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:50:25.333838    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:50:25.352742    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:50:25.352826    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:50:25.366671    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:50:25.366742    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:50:25.378100    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:50:25.378174    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:50:25.388733    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:50:25.388809    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:50:25.399854    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:50:25.399925    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:50:25.409715    8229 logs.go:276] 0 containers: []
	W0729 10:50:25.409726    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:50:25.409781    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:50:25.420529    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:50:25.420545    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:50:25.420551    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:50:25.454629    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:50:25.454641    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:50:25.479842    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:50:25.479854    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:50:25.498130    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:50:25.498140    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:50:25.523882    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:50:25.523890    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:50:25.558537    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:50:25.558546    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:50:25.573433    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:50:25.573444    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:50:25.585846    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:50:25.585860    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:50:25.597330    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:50:25.597342    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:50:25.608701    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:50:25.608714    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:50:25.619781    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:50:25.619793    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:50:25.630799    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:50:25.630809    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:50:25.642138    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:50:25.642152    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:50:25.653167    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:50:25.653181    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:50:25.657388    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:50:25.657394    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:50:25.671455    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:50:25.671464    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:50:28.190663    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:50:33.193541    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:50:33.193983    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:50:33.236052    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:50:33.236199    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:50:33.258304    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:50:33.258409    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:50:33.277881    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:50:33.277943    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:50:33.295679    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:50:33.295757    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:50:33.305726    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:50:33.305796    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:50:33.316544    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:50:33.316608    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:50:33.326869    8229 logs.go:276] 0 containers: []
	W0729 10:50:33.326879    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:50:33.326928    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:50:33.337499    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:50:33.337516    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:50:33.337521    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:50:33.351600    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:50:33.351612    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:50:33.365103    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:50:33.365115    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:50:33.382929    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:50:33.382938    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:50:33.394636    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:50:33.394647    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:50:33.412491    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:50:33.412501    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:50:33.424205    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:50:33.424217    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:50:33.460430    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:50:33.460438    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:50:33.481297    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:50:33.481306    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:50:33.492694    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:50:33.492705    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:50:33.507032    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:50:33.507044    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:50:33.511859    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:50:33.511864    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:50:33.545331    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:50:33.545344    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:50:33.557507    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:50:33.557520    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:50:33.568894    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:50:33.568907    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:50:33.592909    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:50:33.592919    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:50:36.106879    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:50:41.108749    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:50:41.109184    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:50:41.147320    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:50:41.147461    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:50:41.169678    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:50:41.169786    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:50:41.185313    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:50:41.185395    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:50:41.200368    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:50:41.200441    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:50:41.211662    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:50:41.211727    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:50:41.222858    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:50:41.222916    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:50:41.233377    8229 logs.go:276] 0 containers: []
	W0729 10:50:41.233387    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:50:41.233441    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:50:41.244075    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:50:41.244094    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:50:41.244101    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:50:41.255988    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:50:41.256004    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:50:41.277663    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:50:41.277672    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:50:41.294503    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:50:41.294511    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:50:41.318994    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:50:41.319002    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:50:41.330649    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:50:41.330661    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:50:41.342096    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:50:41.342105    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:50:41.356084    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:50:41.356095    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:50:41.369626    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:50:41.369635    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:50:41.381179    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:50:41.381189    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:50:41.392865    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:50:41.392874    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:50:41.396973    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:50:41.396980    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:50:41.431571    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:50:41.431580    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:50:41.447196    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:50:41.447205    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:50:41.458511    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:50:41.458523    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:50:41.493877    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:50:41.493887    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:50:44.009537    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:50:49.011995    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
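The cadence above is minikube's apiserver readiness probe: each attempt GETs /healthz on the guest at 10.0.2.15:8443 with a short client-side timeout, and because the apiserver never answers, every attempt fails with "context deadline exceeded" and triggers another diagnostics pass. Below is a minimal standalone probe with the same shape, offered as a sketch: the URL and the roughly five-second per-request timeout come from the log, while the client construction and output are illustrative and not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The per-request timeout matches the ~5s gap between each
	// "Checking apiserver healthz" line and its "stopped: ...
	// context deadline exceeded" line in the log.
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate on
		// 10.0.2.15:8443, so a bare probe like this one skips
		// verification (illustrative shortcut, not minikube's code).
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz"
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("stopped: %s: %v\n", url, err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("healthz: %s\n", resp.Status)
}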
	I0729 10:50:49.012445    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:50:49.050511    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:50:49.050640    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:50:49.075955    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:50:49.076056    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:50:49.089857    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:50:49.089930    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:50:49.101149    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:50:49.101211    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:50:49.112128    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:50:49.112201    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:50:49.122739    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:50:49.122802    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:50:49.133326    8229 logs.go:276] 0 containers: []
	W0729 10:50:49.133338    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:50:49.133396    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:50:49.143630    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:50:49.143646    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:50:49.143651    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:50:49.160962    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:50:49.160974    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:50:49.180964    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:50:49.180976    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:50:49.217109    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:50:49.217127    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:50:49.234795    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:50:49.234806    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:50:49.247210    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:50:49.247222    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:50:49.264846    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:50:49.264858    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:50:49.279350    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:50:49.279363    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:50:49.291813    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:50:49.291827    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:50:49.332665    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:50:49.332679    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:50:49.346337    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:50:49.346348    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:50:49.361769    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:50:49.361779    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:50:49.373335    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:50:49.373349    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:50:49.384594    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:50:49.384609    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:50:49.408986    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:50:49.408993    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:50:49.413068    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:50:49.413075    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:50:51.932004    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:50:56.934249    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:50:56.934440    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:50:56.973600    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:50:56.973679    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:50:56.986177    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:50:56.986249    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:50:56.997177    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:50:56.997243    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:50:57.007984    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:50:57.008051    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:50:57.018607    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:50:57.018672    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:50:57.030851    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:50:57.030923    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:50:57.041206    8229 logs.go:276] 0 containers: []
	W0729 10:50:57.041222    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:50:57.041279    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:50:57.052358    8229 logs.go:276] 1 containers: [fcf6defc29a4]
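Each diagnostics pass opens by discovering the control-plane containers component by component with a docker name filter, then tailing the last 400 lines of every match; a component with zero matches (kindnet on this cluster) is logged with a warning and skipped. A rough equivalent of that discovery-and-tail step follows, assuming docker is on PATH; the component list and command arguments mirror the log, and the helper names are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the log's
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
// and returns the matching container IDs, if any.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// docker logs --tail 400 <id>, as in each
			// "Gathering logs for" step of the log above.
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s", c, id, out)
		}
	}
}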
	I0729 10:50:57.052375    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:50:57.052381    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:50:57.064003    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:50:57.064016    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:50:57.075389    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:50:57.075403    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:50:57.109417    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:50:57.109425    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:50:57.126193    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:50:57.126205    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:50:57.137273    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:50:57.137284    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:50:57.154531    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:50:57.154541    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:50:57.171422    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:50:57.171431    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:50:57.176183    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:50:57.176192    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:50:57.186940    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:50:57.186953    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:50:57.198808    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:50:57.198821    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:50:57.239344    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:50:57.239356    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:50:57.263803    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:50:57.263811    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:50:57.275564    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:50:57.275576    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:50:57.296317    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:50:57.296329    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:50:57.310483    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:50:57.310494    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:50:59.827198    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:51:04.829884    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:51:04.830079    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:51:04.842062    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:51:04.842139    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:51:04.853287    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:51:04.853374    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:51:04.864097    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:51:04.864171    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:51:04.874757    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:51:04.874840    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:51:04.885298    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:51:04.885367    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:51:04.896248    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:51:04.896315    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:51:04.906635    8229 logs.go:276] 0 containers: []
	W0729 10:51:04.906646    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:51:04.906710    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:51:04.917406    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:51:04.917422    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:51:04.917428    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:51:04.937810    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:51:04.937822    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:51:04.949860    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:51:04.949872    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:51:04.954358    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:51:04.954364    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:51:04.965908    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:51:04.965920    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:51:04.977948    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:51:04.977961    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:51:05.002070    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:51:05.002077    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:51:05.013966    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:51:05.013977    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:51:05.049970    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:51:05.049982    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:51:05.085885    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:51:05.085897    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:51:05.110433    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:51:05.110445    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:51:05.129174    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:51:05.129184    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:51:05.143408    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:51:05.143419    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:51:05.154930    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:51:05.154942    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:51:05.171580    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:51:05.171594    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:51:05.189375    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:51:05.189387    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:51:07.705739    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:51:12.708378    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:51:12.708763    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:51:12.740919    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:51:12.741049    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:51:12.759338    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:51:12.759420    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:51:12.773714    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:51:12.773799    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:51:12.785970    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:51:12.786043    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:51:12.797269    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:51:12.797330    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:51:12.808475    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:51:12.808544    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:51:12.818618    8229 logs.go:276] 0 containers: []
	W0729 10:51:12.818629    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:51:12.818686    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:51:12.829268    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:51:12.829284    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:51:12.829290    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:51:12.864619    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:51:12.864627    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:51:12.898080    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:51:12.898090    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:51:12.916618    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:51:12.916632    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:51:12.928268    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:51:12.928281    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:51:12.942862    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:51:12.942874    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:51:12.955357    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:51:12.955367    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:51:12.967374    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:51:12.967387    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:51:12.993115    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:51:12.993131    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:51:13.014414    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:51:13.014425    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:51:13.025674    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:51:13.025685    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:51:13.037516    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:51:13.037529    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:51:13.049427    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:51:13.049439    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:51:13.053868    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:51:13.053876    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:51:13.068714    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:51:13.068724    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:51:13.082512    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:51:13.082522    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:51:15.595080    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:51:20.597190    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:51:20.597355    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:51:20.611071    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:51:20.611148    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:51:20.622676    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:51:20.622743    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:51:20.633216    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:51:20.633279    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:51:20.643933    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:51:20.644002    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:51:20.654473    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:51:20.654535    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:51:20.664675    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:51:20.664748    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:51:20.675036    8229 logs.go:276] 0 containers: []
	W0729 10:51:20.675046    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:51:20.675098    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:51:20.687470    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:51:20.687489    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:51:20.687494    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:51:20.705596    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:51:20.705607    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:51:20.731016    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:51:20.731022    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:51:20.750957    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:51:20.750968    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:51:20.762237    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:51:20.762249    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:51:20.773548    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:51:20.773561    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:51:20.796175    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:51:20.796188    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:51:20.809927    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:51:20.809938    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:51:20.821725    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:51:20.821739    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:51:20.834355    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:51:20.834369    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:51:20.869762    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:51:20.869774    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:51:20.874131    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:51:20.874138    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:51:20.893302    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:51:20.893316    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:51:20.906528    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:51:20.906539    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:51:20.918204    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:51:20.918217    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:51:20.929730    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:51:20.929740    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:51:23.468208    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:51:28.470449    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:51:28.470642    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:51:28.485220    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:51:28.485302    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:51:28.497492    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:51:28.497568    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:51:28.510803    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:51:28.510875    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:51:28.520920    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:51:28.520988    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:51:28.531368    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:51:28.531433    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:51:28.545187    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:51:28.545255    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:51:28.557754    8229 logs.go:276] 0 containers: []
	W0729 10:51:28.557766    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:51:28.557833    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:51:28.568701    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:51:28.568719    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:51:28.568725    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:51:28.603745    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:51:28.603759    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:51:28.625268    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:51:28.625282    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:51:28.640098    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:51:28.640109    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:51:28.651691    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:51:28.651703    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:51:28.665398    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:51:28.665412    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:51:28.679002    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:51:28.679015    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:51:28.693791    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:51:28.693803    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:51:28.730054    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:51:28.730073    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:51:28.748257    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:51:28.748270    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:51:28.773158    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:51:28.773166    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:51:28.784211    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:51:28.784222    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:51:28.788457    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:51:28.788467    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:51:28.802783    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:51:28.802793    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:51:28.817884    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:51:28.817894    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:51:28.829039    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:51:28.829049    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:51:31.340908    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:51:36.343019    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:51:36.343157    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:51:36.354601    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:51:36.354672    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:51:36.366634    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:51:36.366710    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:51:36.377591    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:51:36.377654    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:51:36.388445    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:51:36.388508    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:51:36.399683    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:51:36.399748    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:51:36.410070    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:51:36.410130    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:51:36.420179    8229 logs.go:276] 0 containers: []
	W0729 10:51:36.420189    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:51:36.420258    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:51:36.436844    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:51:36.436863    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:51:36.436868    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:51:36.449814    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:51:36.449825    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:51:36.474078    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:51:36.474089    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:51:36.478244    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:51:36.478252    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:51:36.492059    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:51:36.492069    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:51:36.504115    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:51:36.504126    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:51:36.521628    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:51:36.521641    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:51:36.533129    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:51:36.533141    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:51:36.566043    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:51:36.566051    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:51:36.580161    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:51:36.580174    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:51:36.594946    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:51:36.594956    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:51:36.606736    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:51:36.606748    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:51:36.631078    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:51:36.631088    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:51:36.642693    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:51:36.642705    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:51:36.653927    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:51:36.653937    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:51:36.689561    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:51:36.689576    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:51:39.203491    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:51:44.206081    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:51:44.206416    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:51:44.234846    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:51:44.234975    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:51:44.253329    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:51:44.253426    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:51:44.274655    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:51:44.274721    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:51:44.285522    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:51:44.285592    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:51:44.296198    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:51:44.296271    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:51:44.307223    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:51:44.307293    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:51:44.317490    8229 logs.go:276] 0 containers: []
	W0729 10:51:44.317500    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:51:44.317558    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:51:44.328413    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:51:44.328429    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:51:44.328436    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:51:44.367254    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:51:44.367265    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:51:44.381767    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:51:44.381778    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:51:44.395622    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:51:44.395634    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:51:44.399840    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:51:44.399848    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:51:44.412769    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:51:44.412778    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:51:44.424130    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:51:44.424150    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:51:44.441957    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:51:44.441968    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:51:44.453168    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:51:44.453179    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:51:44.478316    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:51:44.478323    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:51:44.499866    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:51:44.499875    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:51:44.511503    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:51:44.511514    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:51:44.522494    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:51:44.522509    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:51:44.542058    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:51:44.542068    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:51:44.553217    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:51:44.553228    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:51:44.588108    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:51:44.588123    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:51:47.101580    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:51:52.103916    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:51:52.104364    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:51:52.152770    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:51:52.152917    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:51:52.172331    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:51:52.172435    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:51:52.186785    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:51:52.186860    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:51:52.199600    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:51:52.199674    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:51:52.215433    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:51:52.215503    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:51:52.226126    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:51:52.226201    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:51:52.236847    8229 logs.go:276] 0 containers: []
	W0729 10:51:52.236858    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:51:52.236911    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:51:52.247181    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:51:52.247199    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:51:52.247205    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:51:52.282750    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:51:52.282759    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:51:52.304379    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:51:52.304389    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:51:52.322807    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:51:52.322820    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:51:52.334444    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:51:52.334454    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:51:52.351526    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:51:52.351538    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:51:52.362437    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:51:52.362450    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:51:52.387543    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:51:52.387554    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:51:52.391718    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:51:52.391727    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:51:52.427266    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:51:52.427278    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:51:52.448243    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:51:52.448255    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:51:52.459576    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:51:52.459589    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:51:52.471470    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:51:52.471480    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:51:52.486665    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:51:52.486676    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:51:52.499143    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:51:52.499154    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:51:52.510911    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:51:52.510925    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:51:55.025669    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:00.025980    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:00.026117    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:52:00.037435    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:52:00.037509    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:52:00.048456    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:52:00.048537    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:52:00.059270    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:52:00.059340    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:52:00.070375    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:52:00.070447    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:52:00.083226    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:52:00.083304    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:52:00.095024    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:52:00.095107    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:52:00.107078    8229 logs.go:276] 0 containers: []
	W0729 10:52:00.107090    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:52:00.107152    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:52:00.117747    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:52:00.117764    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:52:00.117770    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:52:00.133472    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:52:00.133484    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:52:00.137919    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:52:00.137935    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:52:00.173011    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:52:00.173021    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:52:00.184449    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:52:00.184462    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:52:00.201233    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:52:00.201244    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:52:00.212994    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:52:00.213005    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:52:00.226140    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:52:00.226156    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:52:00.263516    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:52:00.263527    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:52:00.285052    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:52:00.285068    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:52:00.299844    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:52:00.299855    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:52:00.315911    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:52:00.315923    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:52:00.329670    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:52:00.329682    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:52:00.347228    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:52:00.347240    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:52:00.362611    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:52:00.362623    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:52:00.376063    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:52:00.376075    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:52:02.903091    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:07.905290    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:07.905564    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:52:07.923837    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:52:07.923935    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:52:07.937551    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:52:07.937624    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:52:07.949172    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:52:07.949243    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:52:07.960099    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:52:07.960168    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:52:07.970964    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:52:07.971039    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:52:07.983620    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:52:07.983682    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:52:07.999731    8229 logs.go:276] 0 containers: []
	W0729 10:52:07.999742    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:52:07.999800    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:52:08.011572    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:52:08.011588    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:52:08.011596    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:52:08.016118    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:52:08.016128    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:52:08.030258    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:52:08.030272    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:52:08.043274    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:52:08.043285    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:52:08.083298    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:52:08.083310    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:52:08.095237    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:52:08.095251    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:52:08.114335    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:52:08.114345    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:52:08.126068    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:52:08.126080    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:52:08.143725    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:52:08.143735    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:52:08.159160    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:52:08.159170    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:52:08.194396    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:52:08.194412    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:52:08.214638    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:52:08.214652    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:52:08.228454    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:52:08.228466    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:52:08.243143    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:52:08.243155    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:52:08.255988    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:52:08.256002    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:52:08.280201    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:52:08.280212    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
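
The block above is one of minikube's log-gathering passes: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per control-plane component (cri-dockerd names containers "k8s_<container>_<pod>_...", so the name filter selects one component's containers), then "docker logs --tail 400" for each hit, plus journalctl and dmesg for the host side; the closing "container status" step runs `which crictl || echo crictl` so it can fall back to "sudo docker ps -a" when crictl is absent. A minimal Go sketch of the discovery step; the component names and filter syntax are taken from this log, everything else is illustrative rather than minikube's actual source:

    // Sketch (not minikube's source): per-component container discovery
    // via Docker's name filter, as in the log lines above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            // Equivalent to: docker ps -a --filter=name=k8s_<c> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("W listing %s containers: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
        }
    }
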
	I0729 10:52:10.794488    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:15.794603    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:15.794786    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:52:15.806596    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:52:15.806671    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:52:15.818427    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:52:15.818504    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:52:15.831323    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:52:15.831397    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:52:15.846398    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:52:15.846473    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:52:15.858021    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:52:15.858100    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:52:15.878597    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:52:15.878668    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:52:15.889590    8229 logs.go:276] 0 containers: []
	W0729 10:52:15.889601    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:52:15.889666    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:52:15.901057    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:52:15.901074    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:52:15.901080    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:52:15.916852    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:52:15.916869    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:52:15.930369    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:52:15.930381    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:52:15.945130    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:52:15.945158    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:52:15.964565    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:52:15.964576    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:52:15.989172    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:52:15.989188    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:52:16.029428    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:52:16.029441    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:52:16.043911    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:52:16.043922    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:52:16.059130    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:52:16.059140    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:52:16.073273    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:52:16.073285    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:52:16.084937    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:52:16.084949    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:52:16.097198    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:52:16.097215    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:52:16.110661    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:52:16.110676    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:52:16.115303    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:52:16.115310    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:52:16.152333    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:52:16.152345    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:52:16.174869    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:52:16.174883    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:52:18.689722    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:23.691848    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
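
Each "Checking apiserver healthz" / "stopped:" pair above is a single HTTPS probe of /healthz that dies on a client-side timeout: "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" is Go's net/http timeout error, and the roughly five-second gap between the two log lines suggests a ~5s client timeout. Note the pattern across this whole section: probes of 10.0.2.15:8443 (the QEMU user-network guest address) from the host keep timing out, while everything run over SSH inside the guest succeeds. A hedged sketch of such a probe loop; the endpoint is from the log, and the 5s timeout is inferred rather than confirmed:

    // Sketch, assuming minikube's behaviour rather than quoting its source:
    // poll the apiserver's /healthz with a short client timeout.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // inferred from the log's timestamps
            Transport: &http.Transport{
                // The apiserver serves a self-signed cert; a bare probe skips verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                healthy := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if healthy {
                    fmt.Println("apiserver is healthy")
                    return
                }
            } else {
                fmt.Println("stopped:", err) // matches the log's timeout lines
            }
            time.Sleep(2 * time.Second) // back off before the next probe
        }
    }
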
	I0729 10:52:23.692072    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:52:23.715962    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:52:23.716058    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:52:23.730292    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:52:23.730364    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:52:23.741650    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:52:23.741712    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:52:23.752009    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:52:23.752070    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:52:23.762550    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:52:23.762621    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:52:23.773346    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:52:23.773410    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:52:23.788421    8229 logs.go:276] 0 containers: []
	W0729 10:52:23.788436    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:52:23.788495    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:52:23.800456    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:52:23.800475    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:52:23.800481    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:52:23.835508    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:52:23.835528    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:52:23.850057    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:52:23.850068    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:52:23.861855    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:52:23.861865    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:52:23.872496    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:52:23.872513    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:52:23.886799    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:52:23.886810    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:52:23.921239    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:52:23.921250    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:52:23.932425    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:52:23.932436    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:52:23.944060    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:52:23.944071    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:52:23.963019    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:52:23.963029    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:52:23.974326    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:52:23.974339    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:52:23.987835    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:52:23.987846    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:52:23.992681    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:52:23.992688    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:52:24.014657    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:52:24.014671    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:52:24.028329    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:52:24.028353    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:52:24.041648    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:52:24.041661    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:52:26.566731    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:31.568905    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:31.569172    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:52:31.594581    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:52:31.594684    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:52:31.612142    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:52:31.612232    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:52:31.625854    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:52:31.625931    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:52:31.638553    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:52:31.638630    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:52:31.649344    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:52:31.649409    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:52:31.659593    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:52:31.659664    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:52:31.669778    8229 logs.go:276] 0 containers: []
	W0729 10:52:31.669791    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:52:31.669851    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:52:31.679938    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:52:31.679957    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:52:31.679963    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:52:31.685571    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:52:31.685580    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:52:31.697722    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:52:31.697733    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:52:31.709623    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:52:31.709637    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:52:31.735341    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:52:31.735350    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:52:31.768972    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:52:31.768980    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:52:31.780552    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:52:31.780564    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:52:31.792391    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:52:31.792404    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:52:31.804461    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:52:31.804472    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:52:31.817785    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:52:31.817797    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:52:31.856630    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:52:31.856642    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:52:31.870624    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:52:31.870637    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:52:31.891376    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:52:31.891390    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:52:31.907832    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:52:31.907846    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:52:31.919422    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:52:31.919434    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:52:31.931365    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:52:31.931376    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:52:34.454223    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:39.456540    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:39.456679    8229 kubeadm.go:597] duration metric: took 4m2.900966417s to restartPrimaryControlPlane
	W0729 10:52:39.456750    8229 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 10:52:39.456784    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 10:52:40.484341    8229 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.027558583s)
	I0729 10:52:40.484396    8229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:52:40.489380    8229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:52:40.492218    8229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:52:40.494901    8229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:52:40.494907    8229 kubeadm.go:157] found existing configuration files:
	
	I0729 10:52:40.494931    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/admin.conf
	I0729 10:52:40.497329    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:52:40.497358    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:52:40.499887    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/kubelet.conf
	I0729 10:52:40.502635    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:52:40.502659    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:52:40.505334    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/controller-manager.conf
	I0729 10:52:40.507850    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:52:40.507869    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:52:40.510942    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/scheduler.conf
	I0729 10:52:40.513559    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:52:40.513581    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
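
The four grep/rm pairs above implement stale-config cleanup before re-running kubeadm init: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it (here none of the files existed after the reset, so every grep exited with status 2). A sketch of the same check, assuming plain content matching is all that matters; the endpoint and file names come from the log, the code itself is illustrative:

    // Sketch: drop kubeconfigs that don't point at the expected endpoint,
    // mirroring the grep/rm pairs in the log above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:51249"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: remove it, ignoring errors like rm -f.
                os.Remove(path)
                fmt.Printf("%s: stale or absent, removed\n", path)
                continue
            }
            fmt.Printf("%s: matches expected endpoint, kept\n", path)
        }
    }
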
	I0729 10:52:40.516100    8229 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:52:40.532877    8229 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 10:52:40.532905    8229 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:52:40.581948    8229 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:52:40.582014    8229 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:52:40.582079    8229 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:52:40.630856    8229 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:52:40.634971    8229 out.go:204]   - Generating certificates and keys ...
	I0729 10:52:40.635004    8229 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:52:40.635032    8229 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:52:40.635067    8229 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 10:52:40.635094    8229 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 10:52:40.635125    8229 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 10:52:40.635149    8229 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 10:52:40.635178    8229 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 10:52:40.635207    8229 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 10:52:40.635245    8229 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 10:52:40.635283    8229 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 10:52:40.635304    8229 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 10:52:40.635358    8229 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:52:40.862663    8229 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:52:41.077901    8229 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:52:41.184621    8229 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:52:41.291372    8229 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:52:41.320967    8229 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:52:41.321013    8229 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:52:41.321033    8229 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:52:41.393232    8229 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:52:41.397433    8229 out.go:204]   - Booting up control plane ...
	I0729 10:52:41.397488    8229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:52:41.397533    8229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:52:41.397575    8229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:52:41.397620    8229 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:52:41.397707    8229 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 10:52:46.397819    8229 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001970 seconds
	I0729 10:52:46.397900    8229 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:52:46.402060    8229 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:52:46.911327    8229 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:52:46.911429    8229 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-504000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:52:47.419322    8229 kubeadm.go:310] [bootstrap-token] Using token: 65pupz.78o58rh3wlo636g0
	I0729 10:52:47.426467    8229 out.go:204]   - Configuring RBAC rules ...
	I0729 10:52:47.426586    8229 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:52:47.426679    8229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:52:47.430553    8229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:52:47.432182    8229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:52:47.433715    8229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:52:47.435409    8229 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:52:47.440332    8229 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:52:47.637058    8229 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:52:47.824362    8229 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:52:47.824795    8229 kubeadm.go:310] 
	I0729 10:52:47.824827    8229 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:52:47.824831    8229 kubeadm.go:310] 
	I0729 10:52:47.824876    8229 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:52:47.824882    8229 kubeadm.go:310] 
	I0729 10:52:47.824895    8229 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:52:47.824944    8229 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:52:47.824970    8229 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:52:47.824972    8229 kubeadm.go:310] 
	I0729 10:52:47.825013    8229 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:52:47.825016    8229 kubeadm.go:310] 
	I0729 10:52:47.825042    8229 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:52:47.825049    8229 kubeadm.go:310] 
	I0729 10:52:47.825073    8229 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:52:47.825108    8229 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:52:47.825150    8229 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:52:47.825155    8229 kubeadm.go:310] 
	I0729 10:52:47.825208    8229 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:52:47.825247    8229 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:52:47.825250    8229 kubeadm.go:310] 
	I0729 10:52:47.825315    8229 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65pupz.78o58rh3wlo636g0 \
	I0729 10:52:47.825379    8229 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8d6a503498cfac617ec351c4234f65718d8cbc12c41bd005a6931d270830028d \
	I0729 10:52:47.825399    8229 kubeadm.go:310] 	--control-plane 
	I0729 10:52:47.825401    8229 kubeadm.go:310] 
	I0729 10:52:47.825445    8229 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:52:47.825450    8229 kubeadm.go:310] 
	I0729 10:52:47.825498    8229 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65pupz.78o58rh3wlo636g0 \
	I0729 10:52:47.825554    8229 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8d6a503498cfac617ec351c4234f65718d8cbc12c41bd005a6931d270830028d 
	I0729 10:52:47.825645    8229 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
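
The join commands printed above carry a --discovery-token-ca-cert-hash, which kubeadm defines as the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A self-contained sketch that reproduces such a hash; the certificate path is the certificateDir from the [certs] lines above, and the program itself is illustrative:

    // Sketch: compute a kubeadm-style discovery-token-ca-cert-hash from ca.crt.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // CA location from the "[certs] Using certificateDir folder" line above.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm pins the SHA-256 of the DER-encoded SubjectPublicKeyInfo.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
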
	I0729 10:52:47.825654    8229 cni.go:84] Creating CNI manager for ""
	I0729 10:52:47.825663    8229 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:52:47.830570    8229 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 10:52:47.839575    8229 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 10:52:47.842773    8229 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
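
The scp step above writes minikube's bridge CNI config to /etc/cni/net.d/1-k8s.conflist (496 bytes, per the log). A sketch that writes a representative bridge conflist; the destination path is from the log, but the JSON field values below are typical bridge-plugin settings and are assumptions, not the exact file minikube generated:

    // Sketch: write a representative bridge CNI conflist. The field values
    // are illustrative defaults, not minikube's byte-exact config.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
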
	I0729 10:52:47.847872    8229 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:52:47.847931    8229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:52:47.847962    8229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-504000 minikube.k8s.io/updated_at=2024_07_29T10_52_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=running-upgrade-504000 minikube.k8s.io/primary=true
	I0729 10:52:47.886643    8229 ops.go:34] apiserver oom_adj: -16
	I0729 10:52:47.886762    8229 kubeadm.go:1113] duration metric: took 38.861042ms to wait for elevateKubeSystemPrivileges
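
Two verification steps run here: the clusterrolebinding grants cluster-admin to kube-system's default service account (the "elevateKubeSystemPrivileges" metric above), and the oom_adj check confirms the apiserver's OOM score adjustment is negative (-16), meaning the kernel's OOM killer is biased toward sparing it. A sketch of the oom_adj probe, mirroring the log's `cat /proc/$(pgrep kube-apiserver)/oom_adj` pipeline; illustrative only, since minikube runs the equivalent shell over SSH:

    // Sketch: find the apiserver PID and read its oom_adj from /proc.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pidOut, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err) // pgrep exits non-zero when no process matches
        }
        pid := strings.Fields(string(pidOut))[0]
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", data) // -16 in the run above
    }
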
	I0729 10:52:47.895855    8229 kubeadm.go:394] duration metric: took 4m11.389148542s to StartCluster
	I0729 10:52:47.895875    8229 settings.go:142] acquiring lock: {Name:mk3ce889c5cdf5c514cbf9155d52acf6d279a087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:47.896034    8229 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:52:47.896413    8229 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/kubeconfig: {Name:mkf75fdff2d3e918223b7f2dbeb4359c01007a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:47.896615    8229 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:52:47.896683    8229 config.go:182] Loaded profile config "running-upgrade-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:52:47.896720    8229 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 10:52:47.896760    8229 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-504000"
	I0729 10:52:47.896777    8229 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-504000"
	W0729 10:52:47.896780    8229 addons.go:243] addon storage-provisioner should already be in state true
	I0729 10:52:47.896778    8229 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-504000"
	I0729 10:52:47.896822    8229 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-504000"
	I0729 10:52:47.896792    8229 host.go:66] Checking if "running-upgrade-504000" exists ...
	I0729 10:52:47.897725    8229 kapi.go:59] client config for running-upgrade-504000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/client.key", CAFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102264080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:52:47.897846    8229 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-504000"
	W0729 10:52:47.897850    8229 addons.go:243] addon default-storageclass should already be in state true
	I0729 10:52:47.897860    8229 host.go:66] Checking if "running-upgrade-504000" exists ...
	I0729 10:52:47.900526    8229 out.go:177] * Verifying Kubernetes components...
	I0729 10:52:47.900861    8229 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:52:47.903766    8229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:52:47.903774    8229 sshutil.go:53] new ssh client: &{IP:localhost Port:51217 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/running-upgrade-504000/id_rsa Username:docker}
	I0729 10:52:47.906482    8229 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:47.910558    8229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:47.913489    8229 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:52:47.913495    8229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:52:47.913500    8229 sshutil.go:53] new ssh client: &{IP:localhost Port:51217 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/running-upgrade-504000/id_rsa Username:docker}
	I0729 10:52:47.989684    8229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:52:47.994280    8229 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:52:47.994322    8229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:52:47.998177    8229 api_server.go:72] duration metric: took 101.552208ms to wait for apiserver process to appear ...
	I0729 10:52:47.998184    8229 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:52:47.998190    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:48.027776    8229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:52:48.043356    8229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
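
The two kubectl apply calls above install the storage addons from manifests copied to the node earlier (storageclass.yaml, 271 bytes; storage-provisioner.yaml, 2676 bytes). The storageclass manifest is small; a representative sketch follows, with field values typical of minikube's hostpath provisioner and offered as an assumption, not the byte-exact file. The "standard" class it creates is what the later "Error making standard the default storage class" message refers to:

    # Representative sketch of a minikube-style storageclass.yaml (assumed values).
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: k8s.io/minikube-hostpath
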
	I0729 10:52:53.000265    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:53.000334    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:58.000827    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:58.000856    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:03.001236    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:03.001285    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:08.001742    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:08.001786    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:13.002465    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:13.002506    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:18.003331    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:18.003350    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 10:53:18.357052    8229 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 10:53:18.363124    8229 out.go:177] * Enabled addons: storage-provisioner
	I0729 10:53:18.371086    8229 addons.go:510] duration metric: took 30.474922292s for enable addons: enabled=[storage-provisioner]
	I0729 10:53:23.004382    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:23.004433    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:28.005832    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:28.005854    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:33.007572    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:33.007595    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:38.009719    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:38.009759    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:43.011922    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:43.011943    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:48.014053    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:48.014214    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:48.029811    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:53:48.029890    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:48.047391    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:53:48.047466    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:48.061631    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:53:48.061702    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:48.072774    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:53:48.072833    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:48.086111    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:53:48.086180    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:48.102686    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:53:48.102747    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:48.113366    8229 logs.go:276] 0 containers: []
	W0729 10:53:48.113378    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:48.113435    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:48.125258    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:53:48.125275    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:53:48.125281    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:53:48.137275    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:53:48.137286    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:53:48.148602    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:48.148613    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:48.153148    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:53:48.153155    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:53:48.166404    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:53:48.166415    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:53:48.179958    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:53:48.179967    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:53:48.194433    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:53:48.194442    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:53:48.208709    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:53:48.208720    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:53:48.225797    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:53:48.225808    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:53:48.237746    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:48.237757    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:48.262584    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:48.262593    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:48.299118    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:48.299126    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:48.341110    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:53:48.341122    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:50.854355    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:55.856493    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:55.856593    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:55.871509    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:53:55.871590    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:55.883168    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:53:55.883238    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:55.894282    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:53:55.894354    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:55.904638    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:53:55.904709    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:55.916079    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:53:55.916143    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:55.926575    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:53:55.926639    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:55.936748    8229 logs.go:276] 0 containers: []
	W0729 10:53:55.936759    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:55.936812    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:55.953121    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:53:55.953138    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:53:55.953144    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:53:55.966829    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:53:55.966842    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:53:55.980737    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:53:55.980747    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:53:55.995690    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:53:55.995701    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:53:56.014959    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:56.014968    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:56.038803    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:56.038811    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:56.043222    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:56.043231    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:56.079179    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:53:56.079191    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:53:56.091070    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:53:56.091081    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:53:56.111021    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:53:56.111032    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:53:56.122819    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:53:56.122829    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:56.134627    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:56.134642    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:56.169948    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:53:56.169959    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:53:58.686896    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:03.689143    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:03.689406    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:03.722259    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:03.722361    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:03.737097    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:03.737175    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:03.750139    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:03.750211    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:03.761060    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:03.761134    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:03.771312    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:03.771384    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:03.781389    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:03.781455    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:03.791739    8229 logs.go:276] 0 containers: []
	W0729 10:54:03.791750    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:03.791807    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:03.802502    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:03.802516    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:03.802522    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:03.814697    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:03.814711    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:03.851210    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:03.851225    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:03.868083    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:03.868094    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:03.880638    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:03.880652    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:03.895132    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:03.895145    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:03.906717    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:03.906730    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:03.924247    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:03.924261    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:03.935617    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:03.935626    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:03.959404    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:03.959416    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:03.995679    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:03.995687    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:04.000390    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:04.000397    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:04.014698    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:04.014710    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:06.530091    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:11.532336    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:11.532686    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:11.573033    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:11.573176    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:11.593716    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:11.593806    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:11.609962    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:11.610034    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:11.622161    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:11.622235    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:11.633350    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:11.633427    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:11.644026    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:11.644086    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:11.654214    8229 logs.go:276] 0 containers: []
	W0729 10:54:11.654224    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:11.654283    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:11.664437    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:11.664453    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:11.664458    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:11.668936    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:11.668943    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:11.702916    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:11.702930    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:11.715274    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:11.715285    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:11.738260    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:11.738274    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:11.750006    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:11.750020    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:11.761899    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:11.761911    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:11.797325    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:11.797337    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:11.811925    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:11.811933    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:11.830711    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:11.830721    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:11.842667    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:11.842677    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:11.856602    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:11.856612    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:11.868560    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:11.868571    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
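
Each cycle begins by resolving one container ID per control-plane component. The lookups above are plain docker ps filters on the kubelet's k8s_<component> naming convention; a condensed sketch of that inventory step (component list taken from the lookups in this log) is:

    # Sketch of the per-component container lookup (logs.go:276).
    # cri-dockerd names containers k8s_<component>_<pod>_..., so a name
    # filter on the k8s_ prefix is enough to find each component.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'
    done

kindnet consistently resolves to zero containers, hence the W-level "No container was found matching \"kindnet\"" line in every cycle; minikube probes for it unconditionally even when no kindnet CNI pod exists on the node.
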
	I0729 10:54:14.396307    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:19.398492    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:19.398813    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:19.430555    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:19.430678    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:19.448231    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:19.448318    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:19.462319    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:19.462388    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:19.475046    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:19.475111    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:19.486017    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:19.486090    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:19.502622    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:19.502701    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:19.513428    8229 logs.go:276] 0 containers: []
	W0729 10:54:19.513444    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:19.513498    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:19.526165    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:19.526188    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:19.526193    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:19.538216    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:19.538229    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:19.552441    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:19.552453    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:19.565192    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:19.565205    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:19.576993    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:19.577007    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:19.602023    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:19.602035    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:19.640170    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:19.640185    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:19.675266    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:19.675282    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:19.689500    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:19.689514    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:19.707066    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:19.707084    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:19.721067    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:19.721078    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:19.725679    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:19.725688    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:19.739972    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:19.739986    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
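
For every ID found, the gather step shells out to docker logs with a 400-line cap, wrapped in /bin/bash -c exactly as logged. A standalone equivalent of one such call, using the kube-apiserver container ID from this run's inventory:

    # Sketch of a single log-capture call (logs.go:123).
    /bin/bash -c 'docker logs --tail 400 120daa333441'

The 400-line tail keeps each captured artifact bounded; the same limit recurs as -n 400 in the journalctl captures and tail -n 400 in the dmesg capture.
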
	I0729 10:54:22.254880    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:27.257018    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:27.257280    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:27.284493    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:27.284620    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:27.301881    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:27.301956    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:27.315368    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:27.315446    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:27.327098    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:27.327163    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:27.337805    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:27.337875    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:27.348307    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:27.348373    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:27.361083    8229 logs.go:276] 0 containers: []
	W0729 10:54:27.361094    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:27.361153    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:27.373810    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:27.373825    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:27.373831    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:27.412457    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:27.412471    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:27.417099    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:27.417106    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:27.429085    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:27.429096    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:27.444665    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:27.444678    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:27.456284    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:27.456296    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:27.473292    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:27.473301    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:27.496682    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:27.496689    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:27.507998    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:27.508008    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:27.545552    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:27.545562    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:27.568158    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:27.568167    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:27.582739    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:27.582753    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:27.594653    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:27.594664    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
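
The host-side services are captured with journalctl rather than docker logs: kubelet in one call, and docker together with cri-docker in another (passing -u twice merges both units into a single chronologically interleaved stream). Equivalent standalone commands, copied from the cycle above:

    # kubelet service journal, last 400 entries
    sudo journalctl -u kubelet -n 400
    # dockerd and the cri-dockerd shim, interleaved by timestamp
    sudo journalctl -u docker -u cri-docker -n 400
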
	I0729 10:54:30.108211    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:35.110341    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:35.110540    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:35.132822    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:35.132941    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:35.148171    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:35.148237    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:35.161214    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:35.161292    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:35.172278    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:35.172350    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:35.183022    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:35.183090    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:35.193637    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:35.193699    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:35.203364    8229 logs.go:276] 0 containers: []
	W0729 10:54:35.203376    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:35.203433    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:35.213841    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:35.213857    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:35.213863    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:35.252566    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:35.252577    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:35.257006    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:35.257014    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:35.269710    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:35.269721    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:35.293602    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:35.293611    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:35.319417    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:35.319428    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:35.331491    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:35.331500    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:35.343025    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:35.343036    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:35.354693    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:35.354703    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:35.390511    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:35.390525    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:35.405088    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:35.405098    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:35.430898    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:35.430909    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:35.443819    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:35.443833    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
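
The dmesg capture is the one host command with non-obvious flags. Annotated below (flag meanings per util-linux dmesg; worth confirming against the exact build inside the minikube guest):

    # -P (--nopager)  : never pipe output through a pager
    # -H (--human)    : human-readable timestamps
    # -L=never        : disable ANSI color codes in the captured artifact
    # --level ...     : keep only warning-and-worse kernel records
    # tail -n 400     : same 400-line cap as the other captures
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
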
	I0729 10:54:37.961099    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:42.963287    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:42.963445    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:42.976943    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:42.977013    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:42.988070    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:42.988144    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:42.998520    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:42.998582    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:43.009084    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:43.009149    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:43.019677    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:43.019750    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:43.030319    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:43.030404    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:43.040979    8229 logs.go:276] 0 containers: []
	W0729 10:54:43.040990    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:43.041049    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:43.051230    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:43.051245    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:43.051252    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:43.056394    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:43.056401    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:43.070235    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:43.070245    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:43.081680    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:43.081691    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:43.093229    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:43.093240    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:43.108075    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:43.108086    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:43.119747    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:43.119758    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:43.139073    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:43.139084    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:43.176108    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:43.176120    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:43.212641    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:43.212652    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:43.232272    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:43.232281    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:43.252455    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:43.252467    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:43.264710    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:43.264721    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
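
The "container status" step is runtime-agnostic by construction: the embedded command substitution prefers crictl when it is on the PATH and otherwise falls back to docker ps. Unpacked for readability:

    # Verbatim from the cycles above, reformatted:
    #   `which crictl || echo crictl` resolves to the crictl path if installed,
    #   or to the bare word "crictl" (which then fails to execute) if not;
    #   the outer || catches that failure and runs docker ps -a instead.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
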
	I0729 10:54:45.789750    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:50.791839    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:50.792006    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:50.809115    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:50.809208    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:50.822153    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:50.822230    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:50.834124    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:54:50.834201    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:50.844514    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:50.844581    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:50.854858    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:50.854925    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:50.865288    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:50.865357    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:50.875797    8229 logs.go:276] 0 containers: []
	W0729 10:54:50.875809    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:50.875866    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:50.886715    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:50.886734    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:50.886740    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:50.898487    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:50.898499    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:50.903556    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:50.903565    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:50.915495    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:50.915508    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:50.929650    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:50.929659    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:50.944299    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:50.944311    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:50.956126    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:50.956139    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:50.967761    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:50.967776    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:50.985362    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:50.985375    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:51.010499    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:51.010514    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:51.022337    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:51.022349    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:51.059695    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:54:51.059706    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:54:51.071357    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:54:51.071368    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:54:51.082319    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:51.082338    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:51.119787    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:51.119800    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
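
One thing does change across the otherwise identical cycles: at 10:54:50 the coredns inventory grows from 2 containers ([f179b7a6916f 74a37cb60d42]) to 4 ([571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]). Since docker ps -a lists exited containers as well, this is consistent with the kubelet recreating the coredns pods while the apiserver remains unreachable. A quick way to separate live replicas from dead ones (a diagnostic suggestion, not something the harness itself runs):

    # {{.Status}} distinguishes "Up ..." from "Exited (...)"
    docker ps -a --filter='name=k8s_coredns' --format='{{.ID}} {{.Status}} {{.Names}}'
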
	I0729 10:54:53.636613    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:58.636988    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:58.637139    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:58.657024    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:58.657099    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:58.669636    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:58.669711    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:58.681033    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:54:58.681106    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:58.691916    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:58.691989    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:58.706482    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:58.706551    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:58.717028    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:58.717102    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:58.727390    8229 logs.go:276] 0 containers: []
	W0729 10:54:58.727402    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:58.727456    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:58.743222    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:58.743239    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:58.743245    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:58.780062    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:58.780073    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:58.794694    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:58.794706    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:58.809049    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:58.809059    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:58.824038    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:58.824051    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:58.839633    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:58.839645    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:58.852359    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:58.852372    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:58.872131    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:58.872140    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:58.876696    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:54:58.876703    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:54:58.888713    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:58.888724    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:58.900629    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:58.900642    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:58.924492    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:58.924500    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:58.974656    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:54:58.974668    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:54:58.986576    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:58.986586    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:58.999222    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:58.999232    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
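
The "describe nodes" capture bypasses any host kubectl and uses the version-matched binary minikube ships into the guest, pointed at the in-guest kubeconfig:

    # Version-pinned kubectl (v1.24.1 here) plus the guest's admin kubeconfig.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

Notably, judging by the timestamps this call keeps returning within ~35-50 ms even while the /healthz probe times out, so the apiserver appears to be serving at least some requests on 8443; it is specifically the health endpoint that fails to answer within the 5 s budget.
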
	I0729 10:55:01.513690    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:06.515926    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:06.516126    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:06.540726    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:06.540837    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:06.557642    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:06.557720    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:06.570618    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:06.570691    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:06.581783    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:06.581853    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:06.592169    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:06.592231    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:06.602496    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:06.602561    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:06.614762    8229 logs.go:276] 0 containers: []
	W0729 10:55:06.614777    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:06.614835    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:06.630792    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:06.630809    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:06.630816    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:06.668652    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:06.668663    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:06.683423    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:06.683436    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:06.705832    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:06.705843    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:06.732151    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:06.732160    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:06.736844    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:06.736851    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:06.756441    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:06.756450    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:06.770875    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:06.770885    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:06.782433    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:06.782443    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:06.794419    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:06.794430    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:06.808835    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:06.808845    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:06.820967    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:06.820978    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:06.832495    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:06.832505    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:06.867469    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:06.867482    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:06.878603    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:06.878616    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:09.392438    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:14.394977    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:14.395201    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:14.419166    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:14.419291    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:14.436240    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:14.436318    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:14.449563    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:14.449642    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:14.460679    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:14.460750    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:14.471330    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:14.471397    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:14.482537    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:14.482606    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:14.497148    8229 logs.go:276] 0 containers: []
	W0729 10:55:14.497160    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:14.497220    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:14.507970    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:14.507986    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:14.507991    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:14.519386    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:14.519401    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:14.531183    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:14.531193    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:14.544763    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:14.544774    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:14.570177    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:14.570185    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:14.604384    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:14.604398    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:14.621918    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:14.621928    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:14.632984    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:14.632995    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:14.644507    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:14.644517    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:14.682140    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:14.682151    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:14.693452    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:14.693468    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:14.718910    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:14.718923    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:14.730665    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:14.730676    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:14.742632    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:14.742646    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:14.757282    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:14.757291    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:17.269920    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:22.272214    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:22.272428    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:22.300028    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:22.300130    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:22.315112    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:22.315192    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:22.328001    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:22.328076    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:22.339302    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:22.339374    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:22.350754    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:22.350826    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:22.362379    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:22.362446    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:22.373093    8229 logs.go:276] 0 containers: []
	W0729 10:55:22.373104    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:22.373165    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:22.383436    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:22.383450    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:22.383455    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:22.394808    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:22.394818    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:22.406482    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:22.406492    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:22.418188    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:22.418198    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:22.430158    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:22.430171    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:22.465565    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:22.465575    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:22.499649    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:22.499660    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:22.511892    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:22.511903    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:22.537501    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:22.537510    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:22.541938    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:22.541944    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:22.556840    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:22.556849    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:22.568386    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:22.568397    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:22.582646    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:22.582654    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:22.597002    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:22.597017    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:22.617115    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:22.617125    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:25.129275    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:30.131441    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:30.131668    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:30.152052    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:30.152166    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:30.167040    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:30.167116    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:30.179471    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:30.179544    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:30.190219    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:30.190285    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:30.200805    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:30.200875    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:30.211144    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:30.211211    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:30.221246    8229 logs.go:276] 0 containers: []
	W0729 10:55:30.221257    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:30.221319    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:30.232137    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:30.232155    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:30.232160    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:30.250757    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:30.250767    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:30.262699    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:30.262710    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:30.276948    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:30.276959    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:30.288405    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:30.288416    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:30.300358    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:30.300367    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:30.340091    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:30.340104    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:30.352316    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:30.352327    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:30.375990    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:30.375999    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:30.380368    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:30.380374    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:30.395486    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:30.395497    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:30.408473    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:30.408486    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:30.419568    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:30.419578    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:30.433466    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:30.433477    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:30.469597    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:30.469606    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:32.989183    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:37.991429    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:37.991555    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:38.011232    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:38.011310    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:38.022378    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:38.022443    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:38.032970    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:38.033047    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:38.043451    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:38.043520    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:38.053483    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:38.053545    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:38.065355    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:38.065428    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:38.075583    8229 logs.go:276] 0 containers: []
	W0729 10:55:38.075596    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:38.075651    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:38.086091    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:38.086109    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:38.086115    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:38.122224    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:38.122235    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:38.157232    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:38.157243    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:38.172166    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:38.172179    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:38.189915    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:38.189925    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:38.207493    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:38.207503    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:38.218881    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:38.218890    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:38.244812    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:38.244828    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:38.256557    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:38.256571    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:38.260889    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:38.260899    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:38.274597    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:38.274606    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:38.286169    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:38.286179    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:38.297552    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:38.297566    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:38.309096    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:38.309106    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:38.322386    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:38.322396    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:40.836371    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:45.838621    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:45.838838    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:45.856712    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:45.856786    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:45.878722    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:45.878800    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:45.889873    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:45.889953    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:45.900250    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:45.900314    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:45.910323    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:45.910393    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:45.920784    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:45.920851    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:45.931358    8229 logs.go:276] 0 containers: []
	W0729 10:55:45.931372    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:45.931427    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:45.941775    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:45.941794    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:45.941800    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:45.953773    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:45.953785    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:45.959737    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:45.959747    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:45.984884    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:45.984893    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:46.019391    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:46.019400    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:46.031653    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:46.031666    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:46.047406    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:46.047420    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:46.059787    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:46.059798    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:46.074273    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:46.074285    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:46.086778    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:46.086790    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:46.111265    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:46.111275    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:46.146843    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:46.146854    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:46.161894    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:46.161906    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:46.180196    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:46.180209    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:46.192465    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:46.192479    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:48.706846    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:53.709247    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:53.709660    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:53.749775    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:53.749914    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:53.769554    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:53.769653    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:53.783852    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:53.783931    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:53.802930    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:53.802993    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:53.814411    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:53.814481    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:53.825297    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:53.825361    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:53.839735    8229 logs.go:276] 0 containers: []
	W0729 10:55:53.839745    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:53.839799    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:53.850761    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:53.850778    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:53.850785    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:53.872444    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:53.872455    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:53.884656    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:53.884666    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:53.922465    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:53.922480    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:53.945231    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:53.945244    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:53.957644    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:53.957666    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:53.970095    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:53.970108    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:53.982067    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:53.982078    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:54.017581    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:54.017591    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:54.032267    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:54.032281    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:54.036853    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:54.036859    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:54.051830    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:54.051841    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:54.066323    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:54.066339    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:54.078569    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:54.078584    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:54.102413    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:54.102423    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:56.616118    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:01.617863    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:01.618017    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:01.636055    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:01.636132    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:01.649430    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:01.649517    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:01.661209    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:01.661277    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:01.671799    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:01.671860    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:01.684276    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:01.684331    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:01.697487    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:01.697552    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:01.708435    8229 logs.go:276] 0 containers: []
	W0729 10:56:01.708447    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:01.708498    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:01.718980    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:01.718998    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:01.719003    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:01.758296    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:01.758310    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:01.772936    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:01.772947    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:01.795283    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:01.795293    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:01.806698    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:01.806711    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:01.811875    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:01.811883    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:01.824280    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:01.824291    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:01.836221    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:01.836234    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:01.859593    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:01.859605    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:01.895228    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:01.895235    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:01.906507    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:01.906519    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:01.918298    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:01.918308    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:01.935637    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:01.935648    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:01.950483    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:01.950496    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:56:01.965870    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:01.965884    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:56:04.480653    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:09.482695    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:09.482960    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:09.506137    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:09.506262    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:09.523057    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:09.523135    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:09.536004    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:09.536083    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:09.549243    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:09.549312    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:09.560657    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:09.560728    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:09.571617    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:09.571687    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:09.582041    8229 logs.go:276] 0 containers: []
	W0729 10:56:09.582052    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:09.582116    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:09.593123    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:09.593139    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:09.593146    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:56:09.604622    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:09.604635    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:09.622408    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:09.622420    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:09.659865    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:09.659873    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:09.673678    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:09.673688    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:09.687055    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:09.687070    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:09.699532    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:09.699547    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:09.713137    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:09.713148    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:09.738488    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:09.738496    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:56:09.755049    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:09.755060    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:09.769176    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:09.769186    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:09.780725    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:09.780738    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:09.785085    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:09.785094    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:09.822752    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:09.822767    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:09.837395    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:09.837405    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:12.351556    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:17.353795    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:17.353938    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:17.364591    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:17.364669    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:17.375006    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:17.375073    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:17.385725    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:17.385795    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:17.397765    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:17.397829    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:17.408595    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:17.408661    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:17.419396    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:17.419457    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:17.436017    8229 logs.go:276] 0 containers: []
	W0729 10:56:17.436028    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:17.436078    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:17.446407    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:17.446423    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:17.446428    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:17.460516    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:17.460529    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:17.472306    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:17.472319    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:17.476701    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:17.476707    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:17.490860    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:17.490872    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:56:17.502857    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:17.502868    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:56:17.515175    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:17.515184    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:17.533955    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:17.533970    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:17.545512    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:17.545533    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:17.557748    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:17.557761    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:17.595363    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:17.595373    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:17.607136    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:17.607146    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:17.621897    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:17.621907    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:17.633957    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:17.633967    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:17.658968    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:17.658978    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:20.196069    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:25.197137    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:25.197269    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:25.208782    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:25.208861    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:25.219479    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:25.219546    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:25.230726    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:25.230803    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:25.243878    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:25.243950    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:25.256462    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:25.256529    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:25.267514    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:25.267581    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:25.278990    8229 logs.go:276] 0 containers: []
	W0729 10:56:25.279009    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:25.279068    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:25.289880    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:25.289897    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:25.289909    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:56:25.302046    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:25.302057    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:25.314549    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:25.314561    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:25.340529    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:25.340547    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:25.378790    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:25.378810    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:25.394242    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:25.394256    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:25.406745    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:25.406757    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:56:25.426218    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:25.426229    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:25.441265    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:25.441276    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:25.453116    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:25.453130    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:25.478686    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:25.478701    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:25.494558    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:25.494570    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:25.499298    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:25.499308    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:25.535941    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:25.535952    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:25.554594    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:25.554607    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:28.070340    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:33.072577    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:33.072958    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:33.109701    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:33.109835    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:33.129392    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:33.129498    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:33.144082    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:33.144159    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:33.156187    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:33.156254    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:33.166875    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:33.166947    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:33.177754    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:33.177830    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:33.187908    8229 logs.go:276] 0 containers: []
	W0729 10:56:33.187918    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:33.187981    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:33.199411    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:33.199427    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:33.199433    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:33.235339    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:33.235351    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:33.251497    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:33.251507    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:33.265402    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:33.265412    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:33.277901    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:33.277913    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:33.289833    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:33.289844    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:33.301585    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:33.301596    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:33.306639    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:33.306649    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:56:33.319243    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:33.319252    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:56:33.331376    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:33.331386    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:33.346487    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:33.346502    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:33.358313    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:33.358324    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:33.395649    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:33.395657    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:33.419167    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:33.419177    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:33.450674    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:33.450684    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:35.964379    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:40.966636    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:40.966813    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:40.978142    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:40.978221    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:40.991273    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:40.991347    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:41.003707    8229 logs.go:276] 6 containers: [7a8a34a606b3 68fd6c91f96e 571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:41.003787    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:41.014517    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:41.014583    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:41.029969    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:41.030034    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:41.040532    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:41.040601    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:41.051534    8229 logs.go:276] 0 containers: []
	W0729 10:56:41.051544    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:41.051600    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:41.062482    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:41.062496    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:41.062501    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:41.074226    8229 logs.go:123] Gathering logs for coredns [7a8a34a606b3] ...
	I0729 10:56:41.074236    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8a34a606b3"
	I0729 10:56:41.085415    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:41.085428    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:41.101968    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:41.101981    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:56:41.115974    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:41.115985    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:41.133199    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:41.133211    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:41.145290    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:41.145302    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:41.170469    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:41.170490    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:41.212504    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:41.212523    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:41.227761    8229 logs.go:123] Gathering logs for coredns [68fd6c91f96e] ...
	I0729 10:56:41.227771    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68fd6c91f96e"
	I0729 10:56:41.238985    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:41.238999    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:41.283639    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:41.283653    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:41.297299    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:41.297315    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:41.316004    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:41.316019    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:41.333870    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:41.333884    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:41.338696    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:41.338704    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:41.353332    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:41.353345    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:56:43.867820    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:48.869709    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:48.875212    8229 out.go:177] 
	W0729 10:56:48.879117    8229 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 10:56:48.879132    8229 out.go:239] * 
	W0729 10:56:48.880332    8229 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:56:48.891147    8229 out.go:177] 

** /stderr **
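
The stderr capture above ends the way every cycle before it did: an HTTPS probe of https://10.0.2.15:8443/healthz is cut off by the client timeout after about five seconds, a full log-gathering pass follows, and the loop repeats until the 6m0s node wait expires and minikube aborts with GUEST_START. As a rough illustration of that probe loop, here is a minimal sketch in Go; the 5s per-probe timeout, 6m overall budget, and retry cadence are inferred from the timestamps above, not taken from minikube's api_server.go:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHealthz polls the apiserver /healthz endpoint until it answers 200
    // or the overall deadline passes. Each probe is bounded by the client
    // timeout, which is what produces the "context deadline exceeded
    // (Client.Timeout exceeded while awaiting headers)" lines in the log.
    func probeHealthz(url string, perProbe, overall time.Duration) error {
    	client := &http.Client{
    		Timeout: perProbe,
    		Transport: &http.Transport{
    			// the apiserver certificate is self-signed inside the guest VM
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(overall)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthy
    			}
    		}
    		time.Sleep(2 * time.Second) // brief pause before the next probe
    	}
    	return fmt.Errorf("apiserver healthz never reported healthy within %s", overall)
    }

    func main() {
    	if err := probeHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute); err != nil {
    		fmt.Println("X", err)
    	}
    }

In this run no probe ever got an answer, so every attempt ended in the client timeout and the overall budget was eventually exhausted.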
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-504000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-29 10:56:49.001315 -0700 PDT m=+1311.038778084
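
The failing step can be replayed outside the harness by invoking the same binary with the same flags quoted in the failure line above. A small wrapper (a hypothetical helper, assuming the out/minikube-darwin-arm64 binary built for this run is still on disk) that surfaces the exit status the test asserts on:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // Re-run the exact command version_upgrade_test.go reports as failing
    // and print its exit status; in this run it returned 80 (GUEST_START).
    func main() {
    	cmd := exec.Command("out/minikube-darwin-arm64",
    		"start", "-p", "running-upgrade-504000",
    		"--memory=2200", "--alsologtostderr", "-v=1", "--driver=qemu2")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	err := cmd.Run()
    	if exitErr, ok := err.(*exec.ExitError); ok {
    		fmt.Println("exit status:", exitErr.ExitCode())
    		return
    	}
    	if err != nil {
    		fmt.Println("run error:", err)
    	}
    }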
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-504000 -n running-upgrade-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-504000 -n running-upgrade-504000: exit status 2 (15.773180833s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-504000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-917000          | force-systemd-flag-917000 | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-810000              | force-systemd-env-810000  | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-810000           | force-systemd-env-810000  | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT | 29 Jul 24 10:47 PDT |
	| start   | -p docker-flags-400000                | docker-flags-400000       | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917000             | force-systemd-flag-917000 | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917000          | force-systemd-flag-917000 | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT | 29 Jul 24 10:47 PDT |
	| start   | -p cert-expiration-864000             | cert-expiration-864000    | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-400000 ssh               | docker-flags-400000       | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-400000 ssh               | docker-flags-400000       | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-400000                | docker-flags-400000       | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT | 29 Jul 24 10:47 PDT |
	| start   | -p cert-options-952000                | cert-options-952000       | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-952000 ssh               | cert-options-952000       | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-952000 -- sudo        | cert-options-952000       | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-952000                | cert-options-952000       | jenkins | v1.33.1 | 29 Jul 24 10:47 PDT | 29 Jul 24 10:47 PDT |
	| start   | -p running-upgrade-504000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 10:47 PDT | 29 Jul 24 10:48 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-504000             | running-upgrade-504000    | jenkins | v1.33.1 | 29 Jul 24 10:48 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-864000             | cert-expiration-864000    | jenkins | v1.33.1 | 29 Jul 24 10:50 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-864000             | cert-expiration-864000    | jenkins | v1.33.1 | 29 Jul 24 10:50 PDT | 29 Jul 24 10:50 PDT |
	| start   | -p kubernetes-upgrade-786000          | kubernetes-upgrade-786000 | jenkins | v1.33.1 | 29 Jul 24 10:50 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-786000          | kubernetes-upgrade-786000 | jenkins | v1.33.1 | 29 Jul 24 10:50 PDT | 29 Jul 24 10:50 PDT |
	| start   | -p kubernetes-upgrade-786000          | kubernetes-upgrade-786000 | jenkins | v1.33.1 | 29 Jul 24 10:50 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-786000          | kubernetes-upgrade-786000 | jenkins | v1.33.1 | 29 Jul 24 10:50 PDT | 29 Jul 24 10:50 PDT |
	| start   | -p stopped-upgrade-294000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 10:50 PDT | 29 Jul 24 10:51 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-294000 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 10:51 PDT | 29 Jul 24 10:51 PDT |
	| start   | -p stopped-upgrade-294000             | stopped-upgrade-294000    | jenkins | v1.33.1 | 29 Jul 24 10:51 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:51:47
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
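
A minimal sketch of how one of these log lines breaks down, assuming only the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" format documented above (the regexp and field names below are illustrative, not minikube code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine follows the documented layout:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		sample := "I0729 10:51:47.936503    8358 out.go:291] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(sample)
		if m == nil {
			fmt.Println("no match")
			return
		}
		// severity, month, day, time, pid, file, line, message
		fmt.Printf("%s %s/%s %s pid=%s %s:%s %q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
	}
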
	I0729 10:51:47.936503    8358 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:51:47.936681    8358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:51:47.936685    8358 out.go:304] Setting ErrFile to fd 2...
	I0729 10:51:47.936689    8358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:51:47.936856    8358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:51:47.938224    8358 out.go:298] Setting JSON to false
	I0729 10:51:47.957715    8358 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4876,"bootTime":1722270631,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:51:47.957793    8358 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:51:47.962725    8358 out.go:177] * [stopped-upgrade-294000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:51:47.969644    8358 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:51:47.969704    8358 notify.go:220] Checking for updates...
	I0729 10:51:47.977511    8358 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:51:47.980622    8358 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:51:47.983652    8358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:51:47.986649    8358 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:51:47.989661    8358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:51:47.993104    8358 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:51:47.996581    8358 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 10:51:47.999625    8358 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:51:48.003662    8358 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:51:48.010584    8358 start.go:297] selected driver: qemu2
	I0729 10:51:48.010592    8358 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51474 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:51:48.010644    8358 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:51:48.013616    8358 cni.go:84] Creating CNI manager for ""
	I0729 10:51:48.013633    8358 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:51:48.013665    8358 start.go:340] cluster config:
	{Name:stopped-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51474 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:51:48.013722    8358 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:51:48.021607    8358 out.go:177] * Starting "stopped-upgrade-294000" primary control-plane node in "stopped-upgrade-294000" cluster
	I0729 10:51:48.025614    8358 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:51:48.025634    8358 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 10:51:48.025645    8358 cache.go:56] Caching tarball of preloaded images
	I0729 10:51:48.025708    8358 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:51:48.025715    8358 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 10:51:48.025778    8358 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/config.json ...
	I0729 10:51:48.026290    8358 start.go:360] acquireMachinesLock for stopped-upgrade-294000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:51:48.026327    8358 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "stopped-upgrade-294000"
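
The lock options logged above (Delay:500ms Timeout:13m0s) describe a poll-and-retry acquisition. A hypothetical sketch of that pattern, using an exclusive lock file purely for illustration (this is not minikube's actual lock implementation):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file, retrying every delay
	// until timeout elapses. The lock-file mechanism is illustrative only.
	func acquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				return f, nil // held; the caller removes path to release
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for " + path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		f, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		fmt.Printf("duration metric: took %v to acquire lock\n", time.Since(start))
	}
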
	I0729 10:51:48.026338    8358 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:51:48.026345    8358 fix.go:54] fixHost starting: 
	I0729 10:51:48.026478    8358 fix.go:112] recreateIfNeeded on stopped-upgrade-294000: state=Stopped err=<nil>
	W0729 10:51:48.026487    8358 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:51:48.030611    8358 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-294000" ...
	I0729 10:51:47.101580    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:51:48.038425    8358 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:51:48.038502    8358 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51439-:22,hostfwd=tcp::51440-:2376,hostname=stopped-upgrade-294000 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/disk.qcow2
	I0729 10:51:48.089690    8358 main.go:141] libmachine: STDOUT: 
	I0729 10:51:48.089708    8358 main.go:141] libmachine: STDERR: 
	I0729 10:51:48.089713    8358 main.go:141] libmachine: Waiting for VM to start (ssh -p 51439 docker@127.0.0.1)...
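
The QEMU flags above forward guest port 22 to host port 51439 (hostfwd=tcp::51439-:22), so "Waiting for VM to start" amounts to polling that forwarded port. A sketch under the assumption that an accepted TCP connection counts as ready:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForPort dials the forwarded SSH port until a connection is
	// accepted or the deadline passes.
	func waitForPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s not reachable within %v", addr, timeout)
	}

	func main() {
		if err := waitForPort("127.0.0.1:51439", 5*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("VM SSH port is accepting connections")
	}
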
	I0729 10:51:52.103916    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:51:52.104364    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:51:52.152770    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:51:52.152917    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:51:52.172331    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:51:52.172435    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:51:52.186785    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:51:52.186860    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:51:52.199600    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:51:52.199674    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:51:52.215433    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:51:52.215503    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:51:52.226126    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:51:52.226201    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:51:52.236847    8229 logs.go:276] 0 containers: []
	W0729 10:51:52.236858    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:51:52.236911    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:51:52.247181    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:51:52.247199    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:51:52.247205    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:51:52.282750    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:51:52.282759    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:51:52.304379    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:51:52.304389    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:51:52.322807    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:51:52.322820    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:51:52.334444    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:51:52.334454    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:51:52.351526    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:51:52.351538    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:51:52.362437    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:51:52.362450    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:51:52.387543    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:51:52.387554    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:51:52.391718    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:51:52.391727    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:51:52.427266    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:51:52.427278    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:51:52.448243    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:51:52.448255    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:51:52.459576    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:51:52.459589    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:51:52.471470    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:51:52.471480    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:51:52.486665    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:51:52.486676    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:51:52.499143    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:51:52.499154    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:51:52.510911    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:51:52.510925    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:51:55.025669    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:00.025980    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
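
Each "Checking apiserver healthz" line above is followed roughly five seconds later by a "stopped" line, which matches a per-request client timeout of about 5s. A sketch of a single probe with that shape (TLS verification is skipped only to keep the example self-contained; the timeout value is inferred from the log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz performs one probe of the apiserver healthz endpoint.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // inferred from the ~5s gaps in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err // e.g. "context deadline exceeded (Client.Timeout ...)"
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %s %s\n", resp.Status, body)
		return nil
	}

	func main() {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
		}
	}
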
	I0729 10:52:00.026117    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:52:00.037435    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:52:00.037509    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:52:00.048456    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:52:00.048537    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:52:00.059270    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:52:00.059340    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:52:00.070375    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:52:00.070447    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:52:00.083226    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:52:00.083304    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:52:00.095024    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:52:00.095107    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:52:00.107078    8229 logs.go:276] 0 containers: []
	W0729 10:52:00.107090    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:52:00.107152    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:52:00.117747    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:52:00.117764    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:52:00.117770    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:52:00.133472    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:52:00.133484    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:52:00.137919    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:52:00.137935    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:52:00.173011    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:52:00.173021    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:52:00.184449    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:52:00.184462    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:52:00.201233    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:52:00.201244    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:52:00.212994    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:52:00.213005    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:52:00.226140    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:52:00.226156    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:52:00.263516    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:52:00.263527    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:52:00.285052    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:52:00.285068    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:52:00.299844    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:52:00.299855    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:52:00.315911    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:52:00.315923    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:52:00.329670    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:52:00.329682    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:52:00.347228    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:52:00.347240    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:52:00.362611    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:52:00.362623    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:52:00.376063    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:52:00.376075    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:52:02.903091    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:07.911175    8358 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/config.json ...
	I0729 10:52:07.911552    8358 machine.go:94] provisionDockerMachine start ...
	I0729 10:52:07.911629    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:07.911841    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:07.911849    8358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 10:52:07.905290    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:07.905564    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:52:07.923837    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:52:07.923935    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:52:07.937551    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:52:07.937624    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:52:07.949172    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:52:07.949243    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:52:07.960099    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:52:07.960168    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:52:07.970964    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:52:07.971039    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:52:07.983620    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:52:07.983682    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:52:07.999731    8229 logs.go:276] 0 containers: []
	W0729 10:52:07.999742    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:52:07.999800    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:52:08.011572    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:52:08.011588    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:52:08.011596    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:52:08.016118    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:52:08.016128    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:52:08.030258    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:52:08.030272    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:52:08.043274    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:52:08.043285    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:52:08.083298    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:52:08.083310    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:52:08.095237    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:52:08.095251    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:52:08.114335    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:52:08.114345    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:52:08.126068    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:52:08.126080    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:52:08.143725    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:52:08.143735    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:52:08.159160    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:52:08.159170    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:52:08.194396    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:52:08.194412    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:52:08.214638    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:52:08.214652    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:52:08.228454    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:52:08.228466    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:52:08.243143    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:52:08.243155    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:52:08.255988    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:52:08.256002    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:52:08.280201    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:52:08.280212    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:52:07.978332    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 10:52:07.978350    8358 buildroot.go:166] provisioning hostname "stopped-upgrade-294000"
	I0729 10:52:07.978396    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:07.978518    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:07.978529    8358 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-294000 && echo "stopped-upgrade-294000" | sudo tee /etc/hostname
	I0729 10:52:08.040506    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-294000
	
	I0729 10:52:08.040569    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:08.040701    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:08.040710    8358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-294000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-294000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-294000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:52:08.100281    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
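
The shell script above performs an idempotent /etc/hosts edit: it leaves the file alone if the hostname is already present, rewrites an existing 127.0.1.1 entry if there is one, and appends a new entry otherwise. The same logic as a sketch in Go (a hypothetical helper, with the first test simplified to a substring check):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the shell logic above: leave the file alone if
	// the hostname already appears, rewrite an existing 127.0.1.1 entry, or
	// append a new entry. (The real script matches whole lines; a substring
	// check keeps this sketch short.)
	func ensureHostname(hosts, name string) string {
		if strings.Contains(hosts, name) {
			return hosts
		}
		entry := "127.0.1.1 " + name
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, entry)
		}
		return hosts + entry + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print(ensureHostname(string(data), "stopped-upgrade-294000"))
	}
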
	I0729 10:52:08.100294    8358 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19339-6071/.minikube CaCertPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19339-6071/.minikube}
	I0729 10:52:08.100315    8358 buildroot.go:174] setting up certificates
	I0729 10:52:08.100320    8358 provision.go:84] configureAuth start
	I0729 10:52:08.100328    8358 provision.go:143] copyHostCerts
	I0729 10:52:08.100398    8358 exec_runner.go:144] found /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.pem, removing ...
	I0729 10:52:08.100405    8358 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.pem
	I0729 10:52:08.100784    8358 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.pem (1078 bytes)
	I0729 10:52:08.100964    8358 exec_runner.go:144] found /Users/jenkins/minikube-integration/19339-6071/.minikube/cert.pem, removing ...
	I0729 10:52:08.100971    8358 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19339-6071/.minikube/cert.pem
	I0729 10:52:08.101016    8358 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19339-6071/.minikube/cert.pem (1123 bytes)
	I0729 10:52:08.101109    8358 exec_runner.go:144] found /Users/jenkins/minikube-integration/19339-6071/.minikube/key.pem, removing ...
	I0729 10:52:08.101112    8358 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19339-6071/.minikube/key.pem
	I0729 10:52:08.101154    8358 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19339-6071/.minikube/key.pem (1675 bytes)
	I0729 10:52:08.101237    8358 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-294000 san=[127.0.0.1 localhost minikube stopped-upgrade-294000]
	I0729 10:52:08.238705    8358 provision.go:177] copyRemoteCerts
	I0729 10:52:08.238765    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:52:08.238774    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	I0729 10:52:08.270023    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 10:52:08.278014    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 10:52:08.285914    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 10:52:08.293952    8358 provision.go:87] duration metric: took 193.628125ms to configureAuth
	I0729 10:52:08.293964    8358 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:52:08.294111    8358 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:52:08.294148    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:08.294243    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:08.294249    8358 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 10:52:08.354928    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 10:52:08.354938    8358 buildroot.go:70] root file system type: tmpfs
	I0729 10:52:08.354988    8358 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 10:52:08.355034    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:08.355139    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:08.355176    8358 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 10:52:08.418601    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 10:52:08.418660    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:08.418785    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:08.418793    8358 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 10:52:08.794118    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 10:52:08.794133    8358 machine.go:97] duration metric: took 882.58875ms to provisionDockerMachine
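
The "diff ... || { mv ...; systemctl ...; }" command a few lines up installs docker.service.new and restarts Docker only when the rendered unit differs from what is on disk; here diff failed because no unit existed yet, so the new file was installed and the service enabled. A sketch of that change-detection idea (a hypothetical helper, with the systemctl side effects left out):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// installIfChanged writes newContent to path only when it differs from
	// what is already on disk, and reports whether a restart is needed.
	func installIfChanged(path string, newContent []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, newContent) {
			return false, nil // identical: skip daemon-reload and restart
		}
		// Missing file (the "can't stat" case above) or a real difference:
		// install the new unit and signal that a restart is required.
		if err := os.WriteFile(path, newContent, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("restart needed:", changed)
	}
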
	I0729 10:52:08.794139    8358 start.go:293] postStartSetup for "stopped-upgrade-294000" (driver="qemu2")
	I0729 10:52:08.794145    8358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:52:08.794214    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:52:08.794225    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	I0729 10:52:08.828772    8358 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:52:08.829936    8358 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 10:52:08.829943    8358 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19339-6071/.minikube/addons for local assets ...
	I0729 10:52:08.830020    8358 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19339-6071/.minikube/files for local assets ...
	I0729 10:52:08.830109    8358 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem -> 65432.pem in /etc/ssl/certs
	I0729 10:52:08.830200    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:52:08.833263    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem --> /etc/ssl/certs/65432.pem (1708 bytes)
	I0729 10:52:08.840191    8358 start.go:296] duration metric: took 46.048291ms for postStartSetup
	I0729 10:52:08.840206    8358 fix.go:56] duration metric: took 20.814213542s for fixHost
	I0729 10:52:08.840245    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:08.840349    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:08.840353    8358 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 10:52:08.899046    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722275529.005586712
	
	I0729 10:52:08.899058    8358 fix.go:216] guest clock: 1722275529.005586712
	I0729 10:52:08.899062    8358 fix.go:229] Guest: 2024-07-29 10:52:09.005586712 -0700 PDT Remote: 2024-07-29 10:52:08.840208 -0700 PDT m=+20.935016751 (delta=165.378712ms)
	I0729 10:52:08.899074    8358 fix.go:200] guest clock delta is within tolerance: 165.378712ms
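
The guest clock check above runs a date command on the guest (apparently date +%s.%N, given the seconds.nanoseconds output; the log statement itself garbles the verb as %!s(MISSING)) and compares the result with the host clock; the delta of about 165ms is accepted as within tolerance. A sketch of that comparison using the exact values from the log (the tolerance constant is assumed; the real value is not shown):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta turns a "seconds.nanoseconds" string (the output of
	// `date +%s.%N` on the guest) into an offset from the given host time.
	// %N always prints exactly nine digits, so the fraction parses directly
	// as nanoseconds.
	func guestClockDelta(out string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		if len(parts) != 2 || len(parts[1]) != 9 {
			return 0, fmt.Errorf("unexpected clock output %q", out)
		}
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		nsec, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		// Exact values from the log lines above.
		host := time.Unix(1722275528, 840208000)
		delta, err := guestClockDelta("1722275529.005586712", host)
		if err != nil {
			fmt.Println(err)
			return
		}
		const tolerance = time.Second // assumed; the real tolerance is not shown
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
		// Prints delta=165.378712ms within tolerance: true
	}
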
	I0729 10:52:08.899077    8358 start.go:83] releasing machines lock for "stopped-upgrade-294000", held for 20.87309625s
	I0729 10:52:08.899151    8358 ssh_runner.go:195] Run: cat /version.json
	I0729 10:52:08.899160    8358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:52:08.899159    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	I0729 10:52:08.899179    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	W0729 10:52:08.899695    8358 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51439: connect: connection refused
	I0729 10:52:08.899716    8358 retry.go:31] will retry after 195.722751ms: dial tcp [::1]:51439: connect: connection refused
	W0729 10:52:09.134267    8358 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 10:52:09.134350    8358 ssh_runner.go:195] Run: systemctl --version
	I0729 10:52:09.136915    8358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:52:09.138895    8358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:52:09.138926    8358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 10:52:09.142521    8358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 10:52:09.148146    8358 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:52:09.148161    8358 start.go:495] detecting cgroup driver to use...
	I0729 10:52:09.148244    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:52:09.155415    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 10:52:09.158639    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 10:52:09.161767    8358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 10:52:09.161792    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 10:52:09.165119    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:52:09.168374    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 10:52:09.171181    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:52:09.174058    8358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:52:09.177434    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 10:52:09.180750    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 10:52:09.183670    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 10:52:09.186379    8358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:52:09.189570    8358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:52:09.192788    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:09.274385    8358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 10:52:09.280577    8358 start.go:495] detecting cgroup driver to use...
	I0729 10:52:09.280649    8358 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 10:52:09.286733    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:52:09.291603    8358 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:52:09.304770    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:52:09.309381    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 10:52:09.314099    8358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 10:52:09.373258    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 10:52:09.378715    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:52:09.384403    8358 ssh_runner.go:195] Run: which cri-dockerd
	I0729 10:52:09.385638    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 10:52:09.388439    8358 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 10:52:09.392729    8358 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 10:52:09.468830    8358 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 10:52:09.543159    8358 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 10:52:09.543225    8358 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 10:52:09.548270    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:09.624824    8358 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:52:10.776069    8358 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.151246583s)
	I0729 10:52:10.776122    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 10:52:10.783806    8358 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 10:52:10.789670    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:52:10.794469    8358 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 10:52:10.870749    8358 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 10:52:10.946683    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:11.026423    8358 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 10:52:11.032486    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:52:11.036713    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:11.111306    8358 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 10:52:11.153242    8358 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 10:52:11.153327    8358 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 10:52:11.155453    8358 start.go:563] Will wait 60s for crictl version
	I0729 10:52:11.155482    8358 ssh_runner.go:195] Run: which crictl
	I0729 10:52:11.156811    8358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:52:11.170612    8358 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 10:52:11.170677    8358 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 10:52:11.186976    8358 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 10:52:11.207281    8358 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 10:52:11.207421    8358 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 10:52:11.208773    8358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:52:11.212763    8358 kubeadm.go:883] updating cluster {Name:stopped-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51474 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 10:52:11.212817    8358 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:52:11.212859    8358 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:52:11.223352    8358 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:52:11.223360    8358 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 10:52:11.223407    8358 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:52:11.226517    8358 ssh_runner.go:195] Run: which lz4
	I0729 10:52:11.227878    8358 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 10:52:11.229163    8358 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 10:52:11.229175    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 10:52:12.154326    8358 docker.go:649] duration metric: took 926.501042ms to copy over tarball
	I0729 10:52:12.154384    8358 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 10:52:10.794488    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:13.329464    8358 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.175084291s)
	I0729 10:52:13.329476    8358 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 10:52:13.345524    8358 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:52:13.348877    8358 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 10:52:13.353978    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:13.431846    8358 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:52:14.946015    8358 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.514178667s)
	I0729 10:52:14.946114    8358 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:52:14.960344    8358 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:52:14.960354    8358 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 10:52:14.960359    8358 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
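
LoadCachedImages starts from the expected registry.k8s.io image list above, while the preload (see the "Got preloaded images" block) shipped k8s.gcr.io names, so every expected image is reported as needing transfer. A sketch of that membership comparison (an illustrative helper only):

	package main

	import "fmt"

	// missingImages reports which expected image refs are absent from the
	// runtime's image list; each miss becomes a "needs transfer".
	func missingImages(expected, present []string) []string {
		have := make(map[string]bool, len(present))
		for _, img := range present {
			have[img] = true
		}
		var missing []string
		for _, img := range expected {
			if !have[img] {
				missing = append(missing, img)
			}
		}
		return missing
	}

	func main() {
		expected := []string{"registry.k8s.io/kube-apiserver:v1.24.1"}
		present := []string{"k8s.gcr.io/kube-apiserver:v1.24.1"} // from the preload
		for _, img := range missingImages(expected, present) {
			fmt.Printf("%s wasn't preloaded\n", img)
		}
	}
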
	I0729 10:52:14.965870    8358 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:14.967816    8358 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:52:14.970062    8358 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:52:14.970355    8358 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:14.971596    8358 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:52:14.971767    8358 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:52:14.971853    8358 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:52:14.973231    8358 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 10:52:14.973689    8358 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:52:14.975577    8358 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:52:14.975662    8358 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:52:14.975810    8358 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:52:14.977340    8358 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 10:52:14.977472    8358 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:52:14.978965    8358 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:52:14.979886    8358 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:52:15.386670    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:52:15.386810    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:52:15.396205    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:52:15.396682    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:52:15.400896    8358 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 10:52:15.400922    8358 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:52:15.400959    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:52:15.403257    8358 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 10:52:15.403280    8358 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:52:15.403314    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:52:15.419804    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 10:52:15.426301    8358 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 10:52:15.426322    8358 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:52:15.426304    8358 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 10:52:15.426366    8358 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:52:15.426344    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 10:52:15.426388    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:52:15.426391    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:52:15.430783    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0729 10:52:15.431251    8358 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 10:52:15.431353    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:52:15.435522    8358 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 10:52:15.435542    8358 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 10:52:15.435587    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 10:52:15.449959    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 10:52:15.455634    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 10:52:15.455664    8358 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 10:52:15.455683    8358 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:52:15.455721    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 10:52:15.455732    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:52:15.455817    8358 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 10:52:15.466194    8358 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 10:52:15.466218    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 10:52:15.466290    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 10:52:15.466383    8358 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:52:15.467943    8358 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 10:52:15.467953    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 10:52:15.470527    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 10:52:15.483250    8358 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 10:52:15.483266    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 10:52:15.522738    8358 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 10:52:15.522760    8358 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:52:15.522816    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 10:52:15.538803    8358 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 10:52:15.538821    8358 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:52:15.538827    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 10:52:15.539413    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 10:52:15.539520    8358 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:52:15.577698    8358 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 10:52:15.577738    8358 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 10:52:15.577759    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0729 10:52:15.623863    8358 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 10:52:15.623977    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:15.653134    8358 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 10:52:15.653160    8358 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:15.653238    8358 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:15.686803    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 10:52:15.686927    8358 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:52:15.700690    8358 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 10:52:15.700719    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 10:52:15.764235    8358 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:52:15.764253    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 10:52:16.142390    8358 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 10:52:16.142418    8358 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:52:16.142460    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0729 10:52:16.296176    8358 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 10:52:16.296216    8358 cache_images.go:92] duration metric: took 1.335872667s to LoadCachedImages
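Each cache load above follows the same three-step pipeline: a stat existence check, an scp of the tarball when it is missing, then a pipe into the daemon. A hedged Go sketch of the final step as the log runs it (path taken from the lines above; not minikube's actual implementation):

// load an image tarball into the docker daemon the way the log does:
// sudo cat <tarball> | docker load
cmd := exec.Command("/bin/bash", "-c",
	"sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load")
out, err := cmd.CombinedOutput()
if err != nil {
	log.Fatalf("docker load failed: %v\n%s", err, out)
}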
	W0729 10:52:16.296264    8358 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0729 10:52:16.296272    8358 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 10:52:16.296326    8358 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-294000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:52:16.296389    8358 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 10:52:16.309893    8358 cni.go:84] Creating CNI manager for ""
	I0729 10:52:16.309907    8358 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:52:16.309915    8358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:52:16.309924    8358 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-294000 NodeName:stopped-upgrade-294000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:52:16.309990    8358 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-294000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 10:52:16.310048    8358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 10:52:16.313481    8358 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:52:16.313509    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 10:52:16.316529    8358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 10:52:16.321746    8358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:52:16.326984    8358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
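minikube renders the kubeadm config shown above before shipping it to /var/tmp/minikube/kubeadm.yaml.new. An illustrative stdlib sketch of that rendering step (the fragment and field names are assumptions, not minikube's real template):

package main

import (
	"log"
	"os"
	"text/template"
)

// frag is an assumed fragment, not minikube's real template.
const frag = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	// values as seen in the generated config above
	err := t.Execute(os.Stdout, struct {
		NodeIP string
		Port   int
	}{"10.0.2.15", 8443})
	if err != nil {
		log.Fatal(err)
	}
}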
	I0729 10:52:16.332029    8358 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 10:52:16.333255    8358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
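The one-liner above updates /etc/hosts idempotently: it filters out any stale control-plane.minikube.internal entry, appends the fresh mapping, writes to a temp file, and copies it back with sudo (a bare > redirect would not run with elevated rights). The same logic as a small Go sketch (pinHost is a hypothetical helper):

// pinHost drops any existing line ending in <TAB>host and appends
// a fresh ip<TAB>host entry.
func pinHost(contents, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	return strings.Join(append(kept, ip+"\t"+host), "\n")
}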
	I0729 10:52:16.337222    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:16.419900    8358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:52:16.425057    8358 certs.go:68] Setting up /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000 for IP: 10.0.2.15
	I0729 10:52:16.425069    8358 certs.go:194] generating shared ca certs ...
	I0729 10:52:16.425077    8358 certs.go:226] acquiring lock for ca certs: {Name:mkd86fdb55ccc20c129297fd51f66c0e2f8e203c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:16.425255    8358 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.key
	I0729 10:52:16.425305    8358 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/proxy-client-ca.key
	I0729 10:52:16.425312    8358 certs.go:256] generating profile certs ...
	I0729 10:52:16.425391    8358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.key
	I0729 10:52:16.425416    8358 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key.31b6761a
	I0729 10:52:16.425428    8358 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt.31b6761a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 10:52:16.528637    8358 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt.31b6761a ...
	I0729 10:52:16.528650    8358 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt.31b6761a: {Name:mkf96fc44bc0a8ea540ede29386cc4783d1d43aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:16.528972    8358 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key.31b6761a ...
	I0729 10:52:16.528977    8358 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key.31b6761a: {Name:mka64950b9c5d8430ac7b24db40a506627f9be36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:16.529116    8358 certs.go:381] copying /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt.31b6761a -> /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt
	I0729 10:52:16.529241    8358 certs.go:385] copying /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key.31b6761a -> /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key
	I0729 10:52:16.529386    8358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/proxy-client.key
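The apiserver certificate generated above carries the service VIP, loopback, and node IPs as SANs. A self-contained sketch of issuing such a cert with Go's crypto/x509 (self-signed here purely for brevity; minikube signs with the cluster CA instead):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// the IP SANs listed in the log
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	if _, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key); err != nil {
		log.Fatal(err)
	}
}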
	I0729 10:52:16.529519    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/6543.pem (1338 bytes)
	W0729 10:52:16.529548    8358 certs.go:480] ignoring /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/6543_empty.pem, impossibly tiny 0 bytes
	I0729 10:52:16.529553    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 10:52:16.529583    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem (1078 bytes)
	I0729 10:52:16.529614    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:52:16.529642    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/key.pem (1675 bytes)
	I0729 10:52:16.529700    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem (1708 bytes)
	I0729 10:52:16.530046    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:52:16.537514    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:52:16.545285    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:52:16.552926    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:52:16.559632    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 10:52:16.566234    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 10:52:16.573505    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:52:16.581082    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 10:52:16.588463    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem --> /usr/share/ca-certificates/65432.pem (1708 bytes)
	I0729 10:52:16.595190    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:52:16.601796    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/6543.pem --> /usr/share/ca-certificates/6543.pem (1338 bytes)
	I0729 10:52:16.609068    8358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:52:16.614209    8358 ssh_runner.go:195] Run: openssl version
	I0729 10:52:16.616071    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:52:16.618853    8358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:52:16.620306    8358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:52:16.620325    8358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:52:16.622006    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:52:16.625230    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6543.pem && ln -fs /usr/share/ca-certificates/6543.pem /etc/ssl/certs/6543.pem"
	I0729 10:52:16.628403    8358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6543.pem
	I0729 10:52:16.629692    8358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:36 /usr/share/ca-certificates/6543.pem
	I0729 10:52:16.629713    8358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6543.pem
	I0729 10:52:16.631603    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6543.pem /etc/ssl/certs/51391683.0"
	I0729 10:52:16.634343    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65432.pem && ln -fs /usr/share/ca-certificates/65432.pem /etc/ssl/certs/65432.pem"
	I0729 10:52:16.637581    8358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65432.pem
	I0729 10:52:16.639076    8358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:36 /usr/share/ca-certificates/65432.pem
	I0729 10:52:16.639102    8358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65432.pem
	I0729 10:52:16.640896    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65432.pem /etc/ssl/certs/3ec20f2e.0"
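The openssl x509 -hash calls above exist because OpenSSL resolves trusted CAs through <subject_hash>.0 symlinks in /etc/ssl/certs; each PEM is hashed and then linked under that name. A sketch of the same step (paths taken from the log):

// compute the subject hash, then link the PEM where OpenSSL looks it up
out, err := exec.Command("openssl", "x509", "-hash", "-noout",
	"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
if err == nil {
	hash := strings.TrimSpace(string(out))
	_ = os.Symlink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/"+hash+".0")
}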
	I0729 10:52:16.643767    8358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:52:16.645138    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 10:52:16.647369    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 10:52:16.649183    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 10:52:16.651230    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 10:52:16.652914    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 10:52:16.654645    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 10:52:16.656676    8358 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51474 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:52:16.656742    8358 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:52:16.666784    8358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:52:16.670083    8358 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 10:52:16.670088    8358 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 10:52:16.670109    8358 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 10:52:16.672984    8358 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:52:16.673291    8358 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-294000" does not appear in /Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:52:16.673391    8358 kubeconfig.go:62] /Users/jenkins/minikube-integration/19339-6071/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-294000" cluster setting kubeconfig missing "stopped-upgrade-294000" context setting]
	I0729 10:52:16.673586    8358 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/kubeconfig: {Name:mkf75fdff2d3e918223b7f2dbeb4359c01007a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:16.674046    8358 kapi.go:59] client config for stopped-upgrade-294000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.key", CAFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1020c4080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:52:16.674386    8358 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 10:52:16.677168    8358 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-294000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
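Drift detection above is nothing more than diff -u over the old and new kubeadm YAML; exit status 1 means the files differ and the cluster is reconfigured from the .new file. A sketch of that check in Go (paths from the log):

cmd := exec.Command("sudo", "diff", "-u",
	"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
out, err := cmd.Output()
var ee *exec.ExitError
if errors.As(err, &ee) && ee.ExitCode() == 1 {
	fmt.Printf("config drift detected:\n%s", out) // reconfigure from .new
}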
	I0729 10:52:16.677174    8358 kubeadm.go:1160] stopping kube-system containers ...
	I0729 10:52:16.677213    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:52:16.688114    8358 docker.go:483] Stopping containers: [a9a637b09ebc 81df750d149b bb4196aefa69 4494551802a6 2afc138a6e36 734c1aa632b5 07079e9404aa d6f86f1633f4]
	I0729 10:52:16.688175    8358 ssh_runner.go:195] Run: docker stop a9a637b09ebc 81df750d149b bb4196aefa69 4494551802a6 2afc138a6e36 734c1aa632b5 07079e9404aa d6f86f1633f4
	I0729 10:52:16.698803    8358 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 10:52:16.704569    8358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:52:16.707466    8358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:52:16.707472    8358 kubeadm.go:157] found existing configuration files:
	
	I0729 10:52:16.707492    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/admin.conf
	I0729 10:52:16.710022    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:52:16.710049    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:52:16.712896    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/kubelet.conf
	I0729 10:52:16.715412    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:52:16.715436    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:52:16.718077    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/controller-manager.conf
	I0729 10:52:16.722048    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:52:16.722070    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:52:16.725192    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/scheduler.conf
	I0729 10:52:16.728017    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:52:16.728039    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 10:52:16.730706    8358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:52:16.733976    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:52:16.756526    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:52:17.295983    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:52:17.430233    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:52:17.449202    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:52:17.471908    8358 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:52:17.471981    8358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:52:15.794603    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:15.794786    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:52:15.806596    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:52:15.806671    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:52:15.818427    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:52:15.818504    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:52:15.831323    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:52:15.831397    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:52:15.846398    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:52:15.846473    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:52:15.858021    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:52:15.858100    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:52:15.878597    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:52:15.878668    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:52:15.889590    8229 logs.go:276] 0 containers: []
	W0729 10:52:15.889601    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:52:15.889666    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:52:15.901057    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:52:15.901074    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:52:15.901080    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:52:15.916852    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:52:15.916869    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:52:15.930369    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:52:15.930381    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:52:15.945130    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:52:15.945158    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:52:15.964565    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:52:15.964576    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:52:15.989172    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:52:15.989188    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:52:16.029428    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:52:16.029441    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:52:16.043911    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:52:16.043922    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:52:16.059130    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:52:16.059140    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:52:16.073273    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:52:16.073285    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:52:16.084937    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:52:16.084949    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:52:16.097198    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:52:16.097215    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:52:16.110661    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:52:16.110676    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:52:16.115303    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:52:16.115310    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:52:16.152333    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:52:16.152345    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:52:16.174869    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:52:16.174883    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
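Each "Gathering logs" cycle above runs one docker logs --tail 400 per container ID collected earlier, plus journalctl for kubelet and Docker. The per-container part reduces to a loop like this sketch (IDs copied from the log):

for _, id := range []string{"7cc1c8aea7f7", "7243039f43b7", "4aa9b4b13ef3"} {
	out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	fmt.Printf("== %s ==\n%s", id, out)
}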
	I0729 10:52:18.689722    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:17.973346    8358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:52:18.474031    8358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:52:18.478415    8358 api_server.go:72] duration metric: took 1.00652525s to wait for apiserver process to appear ...
	I0729 10:52:18.478424    8358 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:52:18.478434    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
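The healthz wait seen here (and in the interleaved 8229 process) is a plain poll of https://10.0.2.15:8443/healthz until it answers 200 or the deadline expires. A minimal sketch, with TLS verification skipped only to keep the example short:

client := &http.Client{
	Timeout:   5 * time.Second,
	Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
}
deadline := time.Now().Add(4 * time.Minute)
for time.Now().Before(deadline) {
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err == nil {
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			break // apiserver healthy
		}
	}
	time.Sleep(time.Second)
}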
	I0729 10:52:23.691848    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:23.692072    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:52:23.715962    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:52:23.716058    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:52:23.730292    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:52:23.730364    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:52:23.741650    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:52:23.741712    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:52:23.752009    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:52:23.752070    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:52:23.762550    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:52:23.762621    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:52:23.773346    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:52:23.773410    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:52:23.788421    8229 logs.go:276] 0 containers: []
	W0729 10:52:23.788436    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:52:23.788495    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:52:23.800456    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:52:23.800475    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:52:23.800481    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:52:23.835508    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:52:23.835528    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:52:23.850057    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:52:23.850068    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:52:23.861855    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:52:23.861865    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:52:23.872496    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:52:23.872513    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:52:23.886799    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:52:23.886810    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:52:23.921239    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:52:23.921250    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:52:23.932425    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:52:23.932436    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:52:23.944060    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:52:23.944071    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:52:23.963019    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:52:23.963029    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:52:23.974326    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:52:23.974339    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:52:23.987835    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:52:23.987846    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:52:23.992681    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:52:23.992688    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:52:24.014657    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:52:24.014671    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:52:24.028329    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:52:24.028353    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:52:24.041648    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:52:24.041661    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:52:23.479286    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:23.479369    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:26.566731    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:28.480357    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:28.480419    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:31.568905    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:31.569172    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:52:31.594581    8229 logs.go:276] 2 containers: [8c4ad5249bc8 90622bb860e2]
	I0729 10:52:31.594684    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:52:31.612142    8229 logs.go:276] 2 containers: [c3e1f9023336 c4b3c8945276]
	I0729 10:52:31.612232    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:52:31.625854    8229 logs.go:276] 1 containers: [92f05bbf9ced]
	I0729 10:52:31.625931    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:52:31.638553    8229 logs.go:276] 2 containers: [7cc1c8aea7f7 6565a6abc140]
	I0729 10:52:31.638630    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:52:31.649344    8229 logs.go:276] 1 containers: [7243039f43b7]
	I0729 10:52:31.649409    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:52:31.659593    8229 logs.go:276] 2 containers: [4aa9b4b13ef3 87d43b7d580e]
	I0729 10:52:31.659664    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:52:31.669778    8229 logs.go:276] 0 containers: []
	W0729 10:52:31.669791    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:52:31.669851    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:52:31.679938    8229 logs.go:276] 1 containers: [fcf6defc29a4]
	I0729 10:52:31.679957    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:52:31.679963    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:52:31.685571    8229 logs.go:123] Gathering logs for kube-scheduler [6565a6abc140] ...
	I0729 10:52:31.685580    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6565a6abc140"
	I0729 10:52:31.697722    8229 logs.go:123] Gathering logs for kube-controller-manager [87d43b7d580e] ...
	I0729 10:52:31.697733    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87d43b7d580e"
	I0729 10:52:31.709623    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:52:31.709637    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:52:31.735341    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:52:31.735350    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:52:31.768972    8229 logs.go:123] Gathering logs for kube-scheduler [7cc1c8aea7f7] ...
	I0729 10:52:31.768980    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc1c8aea7f7"
	I0729 10:52:31.780552    8229 logs.go:123] Gathering logs for storage-provisioner [fcf6defc29a4] ...
	I0729 10:52:31.780564    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcf6defc29a4"
	I0729 10:52:31.792391    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:52:31.792404    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:52:31.804461    8229 logs.go:123] Gathering logs for etcd [c4b3c8945276] ...
	I0729 10:52:31.804472    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4b3c8945276"
	I0729 10:52:31.817785    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:52:31.817797    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:52:31.856630    8229 logs.go:123] Gathering logs for kube-apiserver [8c4ad5249bc8] ...
	I0729 10:52:31.856642    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c4ad5249bc8"
	I0729 10:52:31.870624    8229 logs.go:123] Gathering logs for kube-apiserver [90622bb860e2] ...
	I0729 10:52:31.870637    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90622bb860e2"
	I0729 10:52:31.891376    8229 logs.go:123] Gathering logs for etcd [c3e1f9023336] ...
	I0729 10:52:31.891390    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e1f9023336"
	I0729 10:52:31.907832    8229 logs.go:123] Gathering logs for coredns [92f05bbf9ced] ...
	I0729 10:52:31.907846    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92f05bbf9ced"
	I0729 10:52:31.919422    8229 logs.go:123] Gathering logs for kube-proxy [7243039f43b7] ...
	I0729 10:52:31.919434    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7243039f43b7"
	I0729 10:52:31.931365    8229 logs.go:123] Gathering logs for kube-controller-manager [4aa9b4b13ef3] ...
	I0729 10:52:31.931376    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa9b4b13ef3"
	I0729 10:52:34.454223    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:33.480628    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:33.480676    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:39.456540    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:39.456679    8229 kubeadm.go:597] duration metric: took 4m2.900966417s to restartPrimaryControlPlane
	W0729 10:52:39.456750    8229 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 10:52:39.456784    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 10:52:40.484341    8229 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.027558583s)
	I0729 10:52:40.484396    8229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:52:40.489380    8229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:52:40.492218    8229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:52:40.494901    8229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:52:40.494907    8229 kubeadm.go:157] found existing configuration files:
	
	I0729 10:52:40.494931    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/admin.conf
	I0729 10:52:40.497329    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:52:40.497358    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:52:40.499887    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/kubelet.conf
	I0729 10:52:40.502635    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:52:40.502659    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:52:40.505334    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/controller-manager.conf
	I0729 10:52:40.507850    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:52:40.507869    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:52:40.510942    8229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/scheduler.conf
	I0729 10:52:40.513559    8229 kubeadm.go:163] "https://control-plane.minikube.internal:51249" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51249 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:52:40.513581    8229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
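
The grep/rm sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. Here every file is missing (grep exits with status 2), so the rm -f calls are no-ops before kubeadm init regenerates them. A sketch of that logic, with the endpoint and paths copied from the log and the sudo/SSH plumbing omitted:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // cleanStaleKubeconfigs mirrors the grep-then-rm pattern above: any config
    // that does not mention the expected endpoint is removed so kubeadm can
    // write a fresh one. grep exits 1 on no match and 2 on a missing file;
    // either way the file is (re)moved.
    func cleanStaleKubeconfigs(endpoint string, paths []string) {
        for _, p := range paths {
            if err := exec.Command("grep", "-q", endpoint, p).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove: %v\n", endpoint, p, err)
                _ = os.Remove(p) // equivalent to: sudo rm -f <path>
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:51249", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
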
	I0729 10:52:40.516100    8229 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:52:40.532877    8229 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 10:52:40.532905    8229 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:52:40.581948    8229 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:52:40.582014    8229 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:52:40.582079    8229 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 10:52:40.630856    8229 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:52:40.634971    8229 out.go:204]   - Generating certificates and keys ...
	I0729 10:52:40.635004    8229 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:52:40.635032    8229 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:52:40.635067    8229 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 10:52:40.635094    8229 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 10:52:40.635125    8229 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 10:52:40.635149    8229 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 10:52:40.635178    8229 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 10:52:40.635207    8229 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 10:52:40.635245    8229 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 10:52:40.635283    8229 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 10:52:40.635304    8229 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 10:52:40.635358    8229 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:52:40.862663    8229 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:52:41.077901    8229 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:52:41.184621    8229 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:52:41.291372    8229 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:52:41.320967    8229 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:52:41.321013    8229 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:52:41.321033    8229 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:52:41.393232    8229 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:52:38.481029    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:38.481063    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:41.397433    8229 out.go:204]   - Booting up control plane ...
	I0729 10:52:41.397488    8229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:52:41.397533    8229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:52:41.397575    8229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:52:41.397620    8229 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:52:41.397707    8229 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 10:52:46.397819    8229 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001970 seconds
	I0729 10:52:46.397900    8229 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:52:46.402060    8229 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:52:46.911327    8229 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:52:46.911429    8229 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-504000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:52:47.419322    8229 kubeadm.go:310] [bootstrap-token] Using token: 65pupz.78o58rh3wlo636g0
	I0729 10:52:47.426467    8229 out.go:204]   - Configuring RBAC rules ...
	I0729 10:52:47.426586    8229 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:52:47.426679    8229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:52:47.430553    8229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:52:47.432182    8229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0729 10:52:47.433715    8229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:52:47.435409    8229 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:52:47.440332    8229 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:52:47.637058    8229 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:52:47.824362    8229 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:52:47.824795    8229 kubeadm.go:310] 
	I0729 10:52:47.824827    8229 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:52:47.824831    8229 kubeadm.go:310] 
	I0729 10:52:47.824876    8229 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:52:47.824882    8229 kubeadm.go:310] 
	I0729 10:52:47.824895    8229 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:52:47.824944    8229 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:52:47.824970    8229 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:52:47.824972    8229 kubeadm.go:310] 
	I0729 10:52:47.825013    8229 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:52:47.825016    8229 kubeadm.go:310] 
	I0729 10:52:47.825042    8229 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:52:47.825049    8229 kubeadm.go:310] 
	I0729 10:52:47.825073    8229 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:52:47.825108    8229 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:52:47.825150    8229 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:52:47.825155    8229 kubeadm.go:310] 
	I0729 10:52:47.825208    8229 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:52:47.825247    8229 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:52:47.825250    8229 kubeadm.go:310] 
	I0729 10:52:47.825315    8229 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65pupz.78o58rh3wlo636g0 \
	I0729 10:52:47.825379    8229 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8d6a503498cfac617ec351c4234f65718d8cbc12c41bd005a6931d270830028d \
	I0729 10:52:47.825399    8229 kubeadm.go:310] 	--control-plane 
	I0729 10:52:47.825401    8229 kubeadm.go:310] 
	I0729 10:52:47.825445    8229 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:52:47.825450    8229 kubeadm.go:310] 
	I0729 10:52:47.825498    8229 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65pupz.78o58rh3wlo636g0 \
	I0729 10:52:47.825554    8229 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8d6a503498cfac617ec351c4234f65718d8cbc12c41bd005a6931d270830028d 
	I0729 10:52:47.825645    8229 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 10:52:47.825654    8229 cni.go:84] Creating CNI manager for ""
	I0729 10:52:47.825663    8229 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:52:47.830570    8229 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 10:52:47.839575    8229 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 10:52:47.842773    8229 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
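
The two lines above configure the bridge CNI: mkdir -p /etc/cni/net.d, then a 496-byte 1-k8s.conflist copied onto the node. The file's exact contents are not in this log; a typical bridge conflist for this kind of setup looks roughly like the following, where the 10.244.0.0/16 pod subnet and the portmap plugin entry are assumptions of the sketch:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
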
	I0729 10:52:47.847872    8229 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:52:47.847931    8229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:52:47.847962    8229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-504000 minikube.k8s.io/updated_at=2024_07_29T10_52_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=running-upgrade-504000 minikube.k8s.io/primary=true
	I0729 10:52:47.886643    8229 ops.go:34] apiserver oom_adj: -16
	I0729 10:52:47.886762    8229 kubeadm.go:1113] duration metric: took 38.861042ms to wait for elevateKubeSystemPrivileges
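
The oom_adj value above comes from reading procfs for the apiserver process (cat /proc/$(pgrep kube-apiserver)/oom_adj); -16 tells the kernel OOM killer to strongly prefer other victims. A standalone sketch of the same check, assuming a single matching process as on a minikube node:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj mirrors: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
    func apiserverOOMAdj() (string, error) {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return "", err
        }
        // Assumes pgrep matched exactly one process.
        b, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        adj, err := apiserverOOMAdj()
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", adj) // the run above reports -16
    }
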
	I0729 10:52:47.895855    8229 kubeadm.go:394] duration metric: took 4m11.389148542s to StartCluster
	I0729 10:52:47.895875    8229 settings.go:142] acquiring lock: {Name:mk3ce889c5cdf5c514cbf9155d52acf6d279a087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:47.896034    8229 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:52:47.896413    8229 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/kubeconfig: {Name:mkf75fdff2d3e918223b7f2dbeb4359c01007a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:47.896615    8229 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:52:47.896683    8229 config.go:182] Loaded profile config "running-upgrade-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:52:47.896720    8229 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 10:52:47.896760    8229 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-504000"
	I0729 10:52:47.896777    8229 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-504000"
	W0729 10:52:47.896780    8229 addons.go:243] addon storage-provisioner should already be in state true
	I0729 10:52:47.896778    8229 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-504000"
	I0729 10:52:47.896822    8229 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-504000"
	I0729 10:52:47.896792    8229 host.go:66] Checking if "running-upgrade-504000" exists ...
	I0729 10:52:47.897725    8229 kapi.go:59] client config for running-upgrade-504000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/running-upgrade-504000/client.key", CAFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102264080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
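
The rest.Config dump above is the client configuration minikube assembles to reach the apiserver: host, client cert/key under the profile directory, the cluster CA, and a retrying WrapTransport. A minimal client-go sketch that builds an equivalent config from a kubeconfig file (the path is illustrative, not this run's):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path; the run above uses the kubeconfig under
        // /Users/jenkins/minikube-integration/19339-6071/.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        // cfg is a *rest.Config like the one dumped above: Host, client
        // cert/key, and CA file are populated from the kubeconfig entries.
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }
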
	I0729 10:52:47.897846    8229 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-504000"
	W0729 10:52:47.897850    8229 addons.go:243] addon default-storageclass should already be in state true
	I0729 10:52:47.897860    8229 host.go:66] Checking if "running-upgrade-504000" exists ...
	I0729 10:52:47.900526    8229 out.go:177] * Verifying Kubernetes components...
	I0729 10:52:47.900861    8229 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:52:47.903766    8229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:52:47.903774    8229 sshutil.go:53] new ssh client: &{IP:localhost Port:51217 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/running-upgrade-504000/id_rsa Username:docker}
	I0729 10:52:47.906482    8229 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:43.481522    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:43.481585    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:47.910558    8229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:47.913489    8229 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:52:47.913495    8229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:52:47.913500    8229 sshutil.go:53] new ssh client: &{IP:localhost Port:51217 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/running-upgrade-504000/id_rsa Username:docker}
	I0729 10:52:47.989684    8229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:52:47.994280    8229 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:52:47.994322    8229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:52:47.998177    8229 api_server.go:72] duration metric: took 101.552208ms to wait for apiserver process to appear ...
	I0729 10:52:47.998184    8229 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:52:47.998190    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:48.027776    8229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:52:48.043356    8229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
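
Addon installation above follows a two-step pattern per manifest: copy it to /etc/kubernetes/addons/ on the node, then apply it with the version-pinned kubectl against the node-local kubeconfig. A sketch of that pair of steps run node-side, where os.WriteFile stands in for the scp step and the manifest body is elided:

    package main

    import (
        "os"
        "os/exec"
    )

    const kubectl = "/var/lib/minikube/binaries/v1.24.1/kubectl"

    // applyAddon mirrors the command pair above: stage the manifest under
    // /etc/kubernetes/addons/ and apply it with the cluster's own kubectl.
    func applyAddon(name string, manifest []byte) error {
        path := "/etc/kubernetes/addons/" + name
        if err := os.WriteFile(path, manifest, 0o644); err != nil {
            return err
        }
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            kubectl, "apply", "-f", path)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // The run above applies storageclass.yaml (271 bytes) and
        // storage-provisioner.yaml (2676 bytes); bodies elided here.
        if err := applyAddon("storageclass.yaml", []byte("# manifest body here\n")); err != nil {
            panic(err)
        }
    }
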
	I0729 10:52:48.482183    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:48.482201    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:53.000265    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:53.000334    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:53.482954    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:53.483031    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:58.000827    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:58.000856    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:58.484816    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:58.484845    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:03.001236    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:03.001285    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:03.486261    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:03.486305    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:08.001742    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:08.001786    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:08.486708    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:08.486787    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:13.002465    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:13.002506    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:13.489315    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:13.489381    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:18.003331    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:18.003350    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 10:53:18.357052    8229 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 10:53:18.363124    8229 out.go:177] * Enabled addons: storage-provisioner
	I0729 10:53:18.371086    8229 addons.go:510] duration metric: took 30.474922292s for enable addons: enabled=[storage-provisioner]
	I0729 10:53:18.491764    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:18.491885    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:18.507711    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:18.507800    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:18.520240    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:18.520309    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:18.531563    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:18.531632    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:18.541665    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:18.541734    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:18.551962    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:18.552037    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:18.563288    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:18.563357    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:18.573310    8358 logs.go:276] 0 containers: []
	W0729 10:53:18.573323    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:18.573377    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:18.583967    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
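
Each gathering round begins with the enumeration above: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} call per component, relying on the k8s_ name prefix that dockershim/cri-dockerd gives pod containers (which is why "kindnet" legitimately matches zero containers on this bridge-CNI cluster). A sketch of the same enumeration:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listComponentContainers mirrors the docker ps calls above, returning
    // the IDs of all containers (running or exited) for one component.
    func listComponentContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per output line
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
            ids, err := listComponentContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
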
	I0729 10:53:18.583989    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:18.583995    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:18.592090    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:18.592100    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:18.615752    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:18.615764    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:18.651908    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:18.651917    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:18.663265    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:18.663276    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:18.680733    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:18.680742    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:18.694923    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:18.694934    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:53:18.706650    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:18.706661    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:18.718966    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:18.718977    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:18.733141    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:18.733153    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:53:18.748478    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:18.748489    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:18.760278    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:18.760288    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:18.778820    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:18.778830    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:18.790710    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:18.790720    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:18.891934    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:18.891946    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:18.932681    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:18.932691    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:18.946573    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:18.946584    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:21.465747    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:23.004382    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:23.004433    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:26.468422    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:26.468804    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:26.502304    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:26.502451    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:26.522758    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:26.522895    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:26.537581    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:26.537668    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:26.553451    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:26.553525    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:26.567164    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:26.567236    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:26.580803    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:26.580872    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:26.591346    8358 logs.go:276] 0 containers: []
	W0729 10:53:26.591357    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:26.591416    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:26.601728    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:53:26.601746    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:26.601752    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:26.620923    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:26.620933    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:26.632343    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:26.632355    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:53:26.647990    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:26.647999    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:26.661292    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:26.661303    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:26.685099    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:26.685107    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:26.689566    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:26.689573    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:26.725355    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:26.725368    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:26.738958    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:26.738970    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:26.750583    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:26.750593    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:26.766192    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:26.766203    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:26.777808    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:26.777822    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:26.790818    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:26.790832    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:26.805127    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:26.805140    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:26.843714    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:26.843724    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:26.855831    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:26.855845    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:26.893091    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:26.893101    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:53:28.005832    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:28.005854    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:29.408906    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:33.007572    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:33.007595    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:34.411237    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:34.411559    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:34.443739    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:34.443909    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:34.467363    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:34.467483    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:34.483504    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:34.483587    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:34.496606    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:34.496676    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:34.507574    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:34.507641    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:34.518213    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:34.518276    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:34.528747    8358 logs.go:276] 0 containers: []
	W0729 10:53:34.528759    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:34.528809    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:34.538856    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:53:34.538876    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:34.538881    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:34.562619    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:34.562631    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:34.575533    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:34.575545    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:34.590235    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:34.590245    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:34.626297    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:34.626306    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:34.662941    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:34.662954    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:34.677426    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:34.677436    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:34.688415    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:34.688428    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:34.701982    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:34.701993    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:34.720065    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:34.720080    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:34.724253    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:34.724260    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:34.739077    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:34.739088    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:34.777669    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:34.777680    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:53:34.791995    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:34.792006    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:34.807335    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:34.807346    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:34.819509    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:34.819521    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:34.833171    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:34.833182    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:53:37.346793    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:38.009719    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:38.009759    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:42.349072    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:42.349227    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:42.362055    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:42.362140    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:42.372644    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:42.372722    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:42.383090    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:42.383158    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:42.393892    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:42.393966    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:42.404305    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:42.404382    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:42.415013    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:42.415079    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:42.425499    8358 logs.go:276] 0 containers: []
	W0729 10:53:42.425514    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:42.425577    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:42.435781    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:53:42.435798    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:42.435804    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:42.453070    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:42.453083    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:42.488466    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:42.488480    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:42.502050    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:42.502061    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:53:42.516480    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:42.516492    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:42.541232    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:42.541240    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:42.579306    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:42.579316    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:42.593114    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:42.593127    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:42.604357    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:42.604365    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:42.617576    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:42.617586    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:53:42.628678    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:42.628691    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:42.644377    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:42.644388    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:42.663603    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:42.663620    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:42.675430    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:42.675442    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:42.687184    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:42.687198    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:42.691098    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:42.691105    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:42.736824    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:42.736837    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:43.011922    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:43.011943    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:45.251479    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:48.014053    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:48.014214    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:48.029811    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:53:48.029890    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:48.047391    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:53:48.047466    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:48.061631    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:53:48.061702    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:48.072774    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:53:48.072833    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:48.086111    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:53:48.086180    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:48.102686    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:53:48.102747    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:48.113366    8229 logs.go:276] 0 containers: []
	W0729 10:53:48.113378    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:48.113435    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:48.125258    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:53:48.125275    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:53:48.125281    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:53:48.137275    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:53:48.137286    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:53:48.148602    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:48.148613    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:48.153148    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:53:48.153155    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:53:48.166404    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:53:48.166415    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:53:48.179958    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:53:48.179967    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:53:48.194433    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:53:48.194442    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:53:48.208709    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:53:48.208720    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:53:48.225797    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:53:48.225808    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:53:48.237746    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:48.237757    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:48.262584    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:48.262593    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:48.299118    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:48.299126    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:48.341110    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:53:48.341122    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:50.253720    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:50.253865    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:50.273432    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:50.273520    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:50.288569    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:50.288650    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:50.300785    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:50.300854    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:50.311531    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:50.311608    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:50.322105    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:50.322171    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:50.333021    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:50.333087    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:50.342964    8358 logs.go:276] 0 containers: []
	W0729 10:53:50.342974    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:50.343030    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:50.353024    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:53:50.353045    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:50.353050    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:50.390343    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:50.390351    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:50.394546    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:50.394555    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:50.405673    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:50.405684    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:50.421596    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:50.421605    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:50.434903    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:50.434914    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:50.448047    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:50.448057    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:50.459308    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:50.459321    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:50.482659    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:50.482671    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:50.493896    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:50.493909    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:50.527760    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:50.527773    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:50.568178    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:50.568191    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:50.584961    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:50.584974    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:50.596422    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:50.596437    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:50.615222    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:50.615234    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:53:50.626678    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:50.626689    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:50.641072    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:50.641082    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
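
Each gathering pass above follows the same shape: list the container IDs for every control-plane component with docker ps -a --filter=name=k8s_<component> (the logs.go:276 lines), then dump the last 400 lines of each container (the logs.go:123 lines). A hedged Go sketch of that pass — run locally with os/exec for brevity, whereas minikube actually executes these commands over SSH inside the guest (ssh_runner.go:195):

    package sketch

    import (
    	"os/exec"
    	"strings"
    )

    // Control-plane components enumerated in the log output above.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    }

    // gatherComponentLogs lists container IDs per component and tails their
    // logs. Illustrative only; the real runner shells out over SSH, not locally.
    func gatherComponentLogs() (map[string][]string, error) {
    	logs := make(map[string][]string)
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			return nil, err
    		}
    		ids := strings.Fields(string(out))
    		for _, id := range ids {
    			tail, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			if err != nil {
    				continue // tolerate unreadable containers, as the runner does
    			}
    			logs[c] = append(logs[c], string(tail))
    		}
    	}
    	return logs, nil
    }

Components that match zero containers (kindnet here) just produce the W-level "No container was found" line rather than failing the pass.
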
	I0729 10:53:50.854355    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:53.157252    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:55.856493    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:55.856593    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:55.871509    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:53:55.871590    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:55.883168    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:53:55.883238    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:55.894282    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:53:55.894354    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:55.904638    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:53:55.904709    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:55.916079    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:53:55.916143    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:55.926575    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:53:55.926639    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:55.936748    8229 logs.go:276] 0 containers: []
	W0729 10:53:55.936759    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:55.936812    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:55.953121    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:53:55.953138    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:53:55.953144    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:53:55.966829    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:53:55.966842    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:53:55.980737    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:53:55.980747    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:53:55.995690    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:53:55.995701    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:53:56.014959    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:56.014968    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:56.038803    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:56.038811    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:56.043222    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:56.043231    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:56.079179    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:53:56.079191    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:53:56.091070    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:53:56.091081    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:53:56.111021    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:53:56.111032    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:53:56.122819    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:53:56.122829    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:56.134627    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:56.134642    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:56.169948    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:53:56.169959    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:53:58.686896    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:58.159489    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:58.159731    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:58.184445    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:58.184547    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:58.208027    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:58.208097    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:58.221194    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:58.221274    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:58.231744    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:58.231818    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:58.242430    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:58.242506    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:58.253392    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:58.253462    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:58.263770    8358 logs.go:276] 0 containers: []
	W0729 10:53:58.263782    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:58.263843    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:58.275377    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:53:58.275398    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:58.275404    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:58.289045    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:58.289056    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:58.327463    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:58.327475    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:53:58.341531    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:58.341542    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:58.352876    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:58.352887    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:58.368880    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:58.368891    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:58.380375    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:58.380386    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:58.394461    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:58.394472    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:58.405430    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:58.405464    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:58.422933    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:58.422945    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:58.448090    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:58.448100    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:58.486107    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:58.486122    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:58.522033    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:58.522044    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:58.533857    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:58.533869    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:58.538419    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:58.538425    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:58.551573    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:58.551584    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:58.565484    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:58.565496    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:01.078822    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:03.689143    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:03.689406    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:03.722259    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:03.722361    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:03.737097    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:03.737175    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:03.750139    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:03.750211    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:03.761060    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:03.761134    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:03.771312    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:03.771384    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:03.781389    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:03.781455    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:03.791739    8229 logs.go:276] 0 containers: []
	W0729 10:54:03.791750    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:03.791807    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:03.802502    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:03.802516    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:03.802522    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:03.814697    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:03.814711    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:03.851210    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:03.851225    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:03.868083    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:03.868094    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:03.880638    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:03.880652    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:03.895132    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:03.895145    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:03.906717    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:03.906730    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:03.924247    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:03.924261    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:03.935617    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:03.935626    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:03.959404    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:03.959416    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:03.995679    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:03.995687    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:04.000390    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:04.000397    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:04.014698    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:04.014710    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:06.081575    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:06.081932    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:06.119479    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:06.119586    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:06.136539    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:06.136620    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:06.150193    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:06.150272    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:06.163488    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:06.163562    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:06.178741    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:06.178812    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:06.191934    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:06.192010    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:06.202718    8358 logs.go:276] 0 containers: []
	W0729 10:54:06.202731    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:06.202790    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:06.214689    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:06.214706    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:06.214712    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:06.227133    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:06.227145    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:06.268890    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:06.268902    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:06.284157    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:06.284169    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:06.324841    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:06.324863    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:06.340258    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:06.340270    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:06.352572    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:06.352582    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:06.363811    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:06.363823    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:06.382604    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:06.382615    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:06.398296    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:06.398308    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:06.421284    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:06.421292    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:06.456445    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:06.456452    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:06.460816    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:06.460824    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:06.472253    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:06.472265    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:06.491804    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:06.491815    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:06.503334    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:06.503344    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:06.518767    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:06.518779    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
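
The recurring "container status" step is a shell one-liner with a fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a — prefer crictl when present, fall back to plain docker ps otherwise. Spelled out imperatively in Go (again a sketch, not minikube's code):

    package sketch

    import "os/exec"

    // containerStatus mirrors the logged fallback: try crictl first, and if
    // it is missing or fails, fall back to `sudo docker ps -a`.
    func containerStatus() ([]byte, error) {
    	if _, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    			return out, nil
    		}
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

In the shell form, the backtick expansion `which crictl || echo crictl` keeps the line from failing at expansion time when crictl is absent: which fails, echo substitutes the literal word crictl, that command then fails, and the outer || lands on docker.
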
	I0729 10:54:06.530091    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:09.039917    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:11.532336    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:11.532686    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:11.573033    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:11.573176    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:11.593716    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:11.593806    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:11.609962    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:11.610034    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:11.622161    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:11.622235    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:11.633350    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:11.633427    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:11.644026    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:11.644086    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:11.654214    8229 logs.go:276] 0 containers: []
	W0729 10:54:11.654224    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:11.654283    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:11.664437    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:11.664453    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:11.664458    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:11.668936    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:11.668943    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:11.702916    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:11.702930    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:11.715274    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:11.715285    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:11.738260    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:11.738274    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:11.750006    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:11.750020    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:11.761899    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:11.761911    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:11.797325    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:11.797337    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:11.811925    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:11.811933    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:11.830711    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:11.830721    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:11.842667    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:11.842677    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:11.856602    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:11.856612    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:11.868560    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:11.868571    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:14.396307    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:14.042630    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:14.042847    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:14.062343    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:14.062437    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:14.076256    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:14.076344    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:14.088205    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:14.088278    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:14.101287    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:14.101362    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:14.111975    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:14.112041    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:14.122231    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:14.122328    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:14.132751    8358 logs.go:276] 0 containers: []
	W0729 10:54:14.132762    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:14.132817    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:14.143234    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:14.143250    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:14.143257    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:14.182875    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:14.182889    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:14.196565    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:14.196574    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:14.212617    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:14.212631    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:14.227091    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:14.227105    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:14.231454    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:14.231460    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:14.244585    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:14.244597    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:14.256821    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:14.256831    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:14.268343    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:14.268357    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:14.292774    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:14.292787    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:14.331979    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:14.331990    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:14.343362    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:14.343374    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:14.356383    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:14.356395    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:14.392811    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:14.392822    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:14.410544    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:14.410554    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:14.422146    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:14.422156    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:14.439328    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:14.439340    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:16.957253    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:19.398492    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:19.398813    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:19.430555    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:19.430678    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:19.448231    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:19.448318    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:19.462319    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:19.462388    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:19.475046    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:19.475111    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:19.486017    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:19.486090    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:19.502622    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:19.502701    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:19.513428    8229 logs.go:276] 0 containers: []
	W0729 10:54:19.513444    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:19.513498    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:19.526165    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:19.526188    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:19.526193    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:19.538216    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:19.538229    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:19.552441    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:19.552453    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:19.565192    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:19.565205    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:19.576993    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:19.577007    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:19.602023    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:19.602035    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:19.640170    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:19.640185    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:19.675266    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:19.675282    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:19.689500    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:19.689514    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:19.707066    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:19.707084    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:19.721067    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:19.721078    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:19.725679    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:19.725688    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:19.739972    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:19.739986    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:21.959851    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:21.960045    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:21.977496    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:21.977586    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:21.991156    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:21.991234    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:22.002245    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:22.002316    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:22.012937    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:22.013006    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:22.023738    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:22.023807    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:22.035992    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:22.036069    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:22.046081    8358 logs.go:276] 0 containers: []
	W0729 10:54:22.046096    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:22.046154    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:22.056808    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:22.056828    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:22.056834    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:22.091090    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:22.091102    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:22.105464    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:22.105477    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:22.143894    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:22.143906    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:22.155282    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:22.155294    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:22.167983    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:22.167997    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:22.206030    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:22.206038    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:22.220374    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:22.220385    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:22.242872    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:22.242883    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:22.254601    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:22.254612    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:22.279446    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:22.279454    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:22.290986    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:22.290999    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:22.295070    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:22.295077    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:22.308940    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:22.308949    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:22.321448    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:22.321461    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:22.345239    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:22.345252    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:22.369943    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:22.369958    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:22.254880    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:24.883582    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:27.257018    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:27.257280    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:27.284493    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:27.284620    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:27.301881    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:27.301956    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:27.315368    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:27.315446    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:27.327098    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:27.327163    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:27.337805    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:27.337875    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:27.348307    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:27.348373    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:27.361083    8229 logs.go:276] 0 containers: []
	W0729 10:54:27.361094    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:27.361153    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:27.373810    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:27.373825    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:27.373831    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:27.412457    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:27.412471    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:27.417099    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:27.417106    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:27.429085    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:27.429096    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:27.444665    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:27.444678    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:27.456284    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:27.456296    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:27.473292    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:27.473301    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:27.496682    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:27.496689    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:27.507998    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:27.508008    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:27.545552    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:27.545562    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:27.568158    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:27.568167    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:27.582739    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:27.582753    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:27.594653    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:27.594664    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:30.108211    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:29.885988    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:29.886224    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:29.909410    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:29.909497    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:29.921278    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:29.921349    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:29.932621    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:29.932696    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:29.943381    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:29.943453    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:29.957512    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:29.957575    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:29.967902    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:29.967977    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:29.978308    8358 logs.go:276] 0 containers: []
	W0729 10:54:29.978320    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:29.978379    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:29.989448    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:29.989466    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:29.989473    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:30.013073    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:30.013082    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:30.024677    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:30.024687    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:30.062006    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:30.062017    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:30.101106    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:30.101120    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:30.117688    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:30.117700    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:30.131827    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:30.131838    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:30.147139    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:30.147149    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:30.160801    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:30.160813    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:30.172402    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:30.172413    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:30.189431    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:30.189443    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:30.201665    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:30.201676    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:30.238151    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:30.238163    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:30.254092    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:30.254106    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:30.270552    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:30.270565    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:30.274590    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:30.274596    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:30.288436    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:30.288446    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:32.801980    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:35.110341    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:35.110540    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:35.132822    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:35.132941    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:35.148171    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:35.148237    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:35.161214    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:35.161292    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:35.172278    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:35.172350    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:35.183022    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:35.183090    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:35.193637    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:35.193699    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:35.203364    8229 logs.go:276] 0 containers: []
	W0729 10:54:35.203376    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:35.203433    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:35.213841    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:35.213857    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:35.213863    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:35.252566    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:35.252577    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:35.257006    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:35.257014    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:35.269710    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:35.269721    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:35.293602    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:35.293611    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:35.319417    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:35.319428    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:35.331491    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:35.331500    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:35.343025    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:35.343036    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:35.354693    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:35.354703    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:35.390511    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:35.390525    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:35.405088    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:35.405098    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:35.430898    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:35.430909    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:35.443819    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:35.443833    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
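
The two interleaved PIDs above (8229 and 8358) are separate minikube processes repeating the same cycle: probe the apiserver's /healthz endpoint, and on timeout fall back to enumerating and dumping component logs. Below is a minimal, illustrative Go sketch of such a polling loop, assuming a 5-second per-request timeout, a 2-minute overall deadline, and a self-signed cluster certificate; pollHealthz and its parameters are inventions for illustration, not minikube's actual api_server.go code.

    // Illustrative only: a health polling loop in the spirit of the
    // "Checking apiserver healthz ... stopped: context deadline exceeded"
    // lines above. pollHealthz, the 5s per-request timeout, and the
    // 2-minute overall deadline are assumptions, not minikube's code.
    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz GETs url once per interval until it returns 200 OK
    // or the overall deadline expires.
    func pollHealthz(url string, interval, deadline time.Duration) error {
        client := &http.Client{
            Timeout: interval, // yields the "Client.Timeout exceeded" errors seen above
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), deadline)
        defer cancel()
        for {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("stopped: %s: %w", url, ctx.Err())
            case <-time.After(interval):
            }
        }
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
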
	I0729 10:54:37.804239    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:37.804359    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:37.815845    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:37.815926    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:37.834573    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:37.834651    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:37.847048    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:37.847116    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:37.857564    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:37.857636    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:37.868353    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:37.868423    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:37.879125    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:37.879198    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:37.893600    8358 logs.go:276] 0 containers: []
	W0729 10:54:37.893612    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:37.893672    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:37.904316    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:37.904333    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:37.904339    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:37.961099    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:37.947014    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:37.947025    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:37.961094    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:37.961105    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:37.977655    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:37.977671    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:37.989233    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:37.989250    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:38.005956    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:38.005969    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:38.019889    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:38.019904    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:38.024738    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:38.024748    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:38.036571    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:38.036582    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:38.050661    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:38.050675    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:38.062898    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:38.062906    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:38.074265    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:38.074280    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:38.112601    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:38.112614    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:38.127093    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:38.127107    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:38.138750    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:38.138761    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:38.149765    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:38.149774    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:38.174102    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:38.174108    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:40.714114    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:42.963287    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:42.963445    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:42.976943    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:42.977013    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:42.988070    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:42.988144    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:42.998520    8229 logs.go:276] 2 containers: [f179b7a6916f 74a37cb60d42]
	I0729 10:54:42.998582    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:43.009084    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:43.009149    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:43.019677    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:43.019750    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:43.030319    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:43.030404    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:43.040979    8229 logs.go:276] 0 containers: []
	W0729 10:54:43.040990    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:43.041049    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:43.051230    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:43.051245    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:43.051252    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:43.056394    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:43.056401    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:43.070235    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:43.070245    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:43.081680    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:43.081691    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:43.093229    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:43.093240    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:43.108075    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:43.108086    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:43.119747    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:43.119758    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:43.139073    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:43.139084    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:43.176108    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:43.176120    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:43.212641    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:43.212652    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:43.232272    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:43.232281    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:43.252455    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:43.252467    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:43.264710    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:43.264721    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
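
Each cycle discovers containers per component with the exact command visible in the ssh_runner lines: docker ps -a --filter=name=k8s_<component> --format={{.ID}}. A self-contained Go sketch of that discovery step follows; listContainers is a hypothetical helper (the real logic lives in minikube's logs.go), and it assumes a docker binary on PATH.

    // Sketch of the container-discovery step: run the same
    // "docker ps -a --filter=name=k8s_<component> --format={{.ID}}"
    // command the log shows, and collect the IDs it prints.
    // listContainers is a hypothetical helper, not minikube's API.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers (running or not)
    // whose name matches k8s_<component>; an empty slice means none,
    // as with "kindnet" in the log above.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per output line
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines
        }
    }
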
	I0729 10:54:45.716351    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:45.716526    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:45.731246    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:45.731328    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:45.743386    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:45.743459    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:45.753815    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:45.753880    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:45.764376    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:45.764446    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:45.775917    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:45.775980    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:45.786580    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:45.786645    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:45.796975    8358 logs.go:276] 0 containers: []
	W0729 10:54:45.796988    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:45.797044    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:45.810903    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:45.810921    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:45.810927    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:45.825613    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:45.825625    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:45.840056    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:45.840067    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:45.851407    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:45.851419    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:45.863171    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:45.863181    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:45.867606    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:45.867613    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:45.902739    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:45.902749    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:45.920805    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:45.920816    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:45.959740    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:45.959752    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:45.982260    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:45.982271    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:46.005091    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:46.005101    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:46.041280    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:46.041291    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:46.052928    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:46.052940    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:46.065583    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:46.065594    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:46.081070    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:46.081083    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:46.099136    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:46.099150    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:46.110962    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:46.110973    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:45.789750    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:48.624044    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:50.791839    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:50.792006    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:50.809115    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:50.809208    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:50.822153    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:50.822230    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:50.834124    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:54:50.834201    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:50.844514    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:50.844581    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:50.854858    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:50.854925    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:50.865288    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:50.865357    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:50.875797    8229 logs.go:276] 0 containers: []
	W0729 10:54:50.875809    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:50.875866    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:50.886715    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:50.886734    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:50.886740    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:50.898487    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:50.898499    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:50.903556    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:50.903565    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:50.915495    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:50.915508    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:50.929650    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:50.929659    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:50.944299    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:50.944311    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:50.956126    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:50.956139    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:50.967761    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:50.967776    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:50.985362    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:50.985375    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:51.010499    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:51.010514    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:51.022337    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:51.022349    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:51.059695    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:54:51.059706    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:54:51.071357    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:54:51.071368    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:54:51.082319    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:51.082338    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:51.119787    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:51.119800    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
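
Once containers are enumerated, each "Gathering logs for <component> [<id>] ..." line fans out to docker logs --tail 400 <id> (with journalctl used instead for the kubelet and Docker units). Here is a short Go sketch of that fan-out under the same docker-CLI assumption; gatherLogs is illustrative, and the two container IDs are taken from the cycle above.

    // Sketch of the log-gathering fan-out: tail the last 400 lines of
    // each discovered container, exactly as the commands above do.
    // gatherLogs is an illustrative helper, not minikube's logs.go.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gatherLogs(name, id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        if err != nil {
            return "", fmt.Errorf("gathering logs for %s [%s]: %w", name, id, err)
        }
        return string(out), nil
    }

    func main() {
        // IDs taken from the enumeration in the cycle above.
        targets := map[string]string{
            "kube-apiserver": "120daa333441",
            "etcd":           "6cfb0c541a62",
        }
        for name, id := range targets {
            logs, err := gatherLogs(name, id)
            if err != nil {
                fmt.Println(err)
                continue
            }
            fmt.Printf("gathered %d bytes for %s [%s]\n", len(logs), name, id)
        }
    }
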
	I0729 10:54:53.636613    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:53.626229    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:53.626466    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:53.645334    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:53.645418    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:53.659413    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:53.659489    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:53.671905    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:53.671970    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:53.682559    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:53.682625    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:53.692995    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:53.693058    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:53.703624    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:53.703697    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:53.714250    8358 logs.go:276] 0 containers: []
	W0729 10:54:53.714261    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:53.714316    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:53.725119    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:53.725136    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:53.725143    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:53.739315    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:53.739329    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:53.752875    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:53.752890    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:53.764189    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:53.764200    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:53.780143    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:53.780158    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:53.796787    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:53.796801    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:53.810034    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:53.810044    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:53.821464    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:53.821476    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:53.833127    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:53.833139    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:53.845011    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:53.845027    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:53.880168    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:53.880179    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:53.891641    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:53.891653    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:53.928862    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:53.928874    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:53.943177    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:53.943191    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:53.955089    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:53.955100    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:53.979581    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:53.979589    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:54.018430    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:54.018437    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:56.524456    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:58.636988    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:58.637139    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:58.657024    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:54:58.657099    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:58.669636    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:54:58.669711    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:58.681033    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:54:58.681106    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:58.691916    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:54:58.691989    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:58.706482    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:54:58.706551    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:58.717028    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:54:58.717102    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:58.727390    8229 logs.go:276] 0 containers: []
	W0729 10:54:58.727402    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:58.727456    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:58.743222    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:54:58.743239    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:58.743245    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:58.780062    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:54:58.780073    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:54:58.794694    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:54:58.794706    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:54:58.809049    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:54:58.809059    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:54:58.824038    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:54:58.824051    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:54:58.839633    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:54:58.839645    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:54:58.852359    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:54:58.852372    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:54:58.872131    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:58.872140    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:58.876696    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:54:58.876703    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:54:58.888713    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:54:58.888724    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:54:58.900629    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:58.900642    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:58.924492    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:58.924500    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:58.974656    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:54:58.974668    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:54:58.986576    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:54:58.986586    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:54:58.999222    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:54:58.999232    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:01.526682    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:01.526817    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:01.547109    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:01.547213    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:01.565531    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:01.565620    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:01.577931    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:01.578004    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:01.592342    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:01.592418    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:01.602862    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:01.602932    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:01.613020    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:01.613091    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:01.623133    8358 logs.go:276] 0 containers: []
	W0729 10:55:01.623145    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:01.623204    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:01.633571    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:01.633590    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:01.633597    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:01.647218    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:01.647228    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:01.661917    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:01.661929    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:01.674906    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:01.674916    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:01.687015    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:01.687025    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:01.723937    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:01.723950    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:01.758717    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:01.758729    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:01.770488    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:01.770501    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:01.808557    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:01.808569    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:01.822121    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:01.822136    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:01.834456    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:01.834466    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:01.850533    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:01.850546    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:01.868933    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:01.868946    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:01.893348    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:01.893356    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:01.897366    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:01.897371    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:01.911292    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:01.911301    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:01.925170    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:01.925186    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:01.513690    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:04.439080    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:06.515926    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:06.516126    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:06.540726    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:06.540837    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:06.557642    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:06.557720    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:06.570618    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:06.570691    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:06.581783    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:06.581853    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:06.592169    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:06.592231    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:06.602496    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:06.602561    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:06.614762    8229 logs.go:276] 0 containers: []
	W0729 10:55:06.614777    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:06.614835    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:06.630792    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:06.630809    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:06.630816    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:06.668652    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:06.668663    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:06.683423    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:06.683436    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:06.705832    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:06.705843    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:06.732151    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:06.732160    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:06.736844    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:06.736851    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:06.756441    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:06.756450    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:06.770875    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:06.770885    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:06.782433    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:06.782443    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:06.794419    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:06.794430    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:06.808835    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:06.808845    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:06.820967    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:06.820978    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:06.832495    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:06.832505    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:06.867469    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:06.867482    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:06.878603    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:06.878616    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:09.392438    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:09.441708    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:09.441993    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:09.474909    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:09.475045    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:09.494069    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:09.494167    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:09.508493    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:09.508576    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:09.521286    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:09.521369    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:09.533982    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:09.534052    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:09.544888    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:09.544958    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:09.557206    8358 logs.go:276] 0 containers: []
	W0729 10:55:09.557219    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:09.557284    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:09.572044    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:09.572061    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:09.572066    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:09.609486    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:09.609497    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:09.625222    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:09.625231    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:09.648407    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:09.648418    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:09.660183    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:09.660192    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:09.698692    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:09.698701    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:09.735505    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:09.735518    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:09.748128    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:09.748139    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:09.760310    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:09.760321    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:09.777811    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:09.777823    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:09.789366    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:09.789376    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:09.803564    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:09.803577    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:09.817626    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:09.817637    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:09.821970    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:09.821978    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:09.836776    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:09.836786    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:09.848287    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:09.848298    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:09.862643    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:09.862652    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:12.376552    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:14.394977    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:14.395201    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:14.419166    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:14.419291    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:14.436240    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:14.436318    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:14.449563    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:14.449642    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:14.460679    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:14.460750    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:14.471330    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:14.471397    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:14.482537    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:14.482606    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:14.497148    8229 logs.go:276] 0 containers: []
	W0729 10:55:14.497160    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:14.497220    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:14.507970    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:14.507986    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:14.507991    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:14.519386    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:14.519401    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:14.531183    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:14.531193    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:14.544763    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:14.544774    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:14.570177    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:14.570185    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:14.604384    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:14.604398    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:14.621918    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:14.621928    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:14.632984    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:14.632995    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:14.644507    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:14.644517    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:14.682140    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:14.682151    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:14.693452    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:14.693468    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:14.718910    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:14.718923    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:14.730665    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:14.730676    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:14.742632    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:14.742646    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:14.757282    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:14.757291    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:17.378808    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:17.378955    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:17.401739    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:17.401836    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:17.416797    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:17.416878    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:17.429284    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:17.429355    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:17.440511    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:17.440585    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:17.451053    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:17.451123    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:17.461652    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:17.461724    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:17.472023    8358 logs.go:276] 0 containers: []
	W0729 10:55:17.472034    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:17.472097    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:17.483144    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:17.483161    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:17.483167    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:17.488160    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:17.488167    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:17.502671    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:17.502681    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:17.538117    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:17.538128    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:17.561744    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:17.561753    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:17.599249    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:17.599259    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:17.613585    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:17.613596    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:17.625251    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:17.625262    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:17.644084    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:17.644096    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:17.655385    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:17.655397    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:17.667243    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:17.667254    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:17.682096    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:17.682106    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:17.698494    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:17.698508    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:17.710334    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:17.710350    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:17.747695    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:17.747709    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:17.765007    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:17.765021    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:17.782907    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:17.782917    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
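
The api_server.go:253/269 pairs that bracket each pass in this trace poll the apiserver's /healthz endpoint and repeatedly hit the client deadline ("stopped: ... context deadline exceeded"). As a rough sketch only, not minikube's actual implementation: a minimal Go poll against the same endpoint might look like the following, where the URL comes from the trace and the five-second timeout and the TLS-verification skip are assumptions.

    // Minimal sketch of a /healthz poll, assuming a short per-request
    // timeout and a self-signed apiserver certificate. Not minikube's code.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func checkHealthz(url string) error {
    	client := &http.Client{
    		// Assumed timeout; the log only shows the deadline being exceeded.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumption: a diagnostic poll against the VM's apiserver
    			// would skip verification of its self-signed certificate.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		// Mirrors the "stopped: ... context deadline exceeded" lines.
    		return fmt.Errorf("stopped: %w", err)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    		fmt.Println(err)
    	}
    }
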
	I0729 10:55:17.269920    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:20.309268    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:22.272214    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:22.272428    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:22.300028    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:22.300130    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:22.315112    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:22.315192    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:22.328001    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:22.328076    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:22.339302    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:22.339374    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:22.350754    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:22.350826    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:22.362379    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:22.362446    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:22.373093    8229 logs.go:276] 0 containers: []
	W0729 10:55:22.373104    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:22.373165    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:22.383436    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:22.383450    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:22.383455    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:22.394808    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:22.394818    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:22.406482    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:22.406492    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:22.418188    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:22.418198    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:22.430158    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:22.430171    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:22.465565    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:22.465575    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:22.499649    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:22.499660    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:22.511892    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:22.511903    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:22.537501    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:22.537510    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:22.541938    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:22.541944    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:22.556840    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:22.556849    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:22.568386    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:22.568397    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:22.582646    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:22.582654    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:22.597002    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:22.597017    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:22.617115    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:22.617125    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
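
Each gathering pass opens by enumerating container IDs per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}. The -a flag also surfaces exited containers, which is why PID 8358 sees two IDs for kube-apiserver, etcd, kube-scheduler, kube-controller-manager, and storage-provisioner while PID 8229 sees one of each. A hedged Go sketch of that enumeration, shelling out to the same docker invocation; containerIDs is an invented helper name, and the component list is copied verbatim from the filters above.

    // Illustrative only: list container IDs per component the way the
    // logs.go:276 lines do, using the docker command seen in the trace.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	// One ID per line; empty output yields an empty slice,
    	// matching the "0 containers: []" lines for kindnet.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	// Component names taken verbatim from the filters in the log.
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }
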
	I0729 10:55:25.129275    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:25.311550    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:25.311699    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:25.325178    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:25.325262    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:25.336299    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:25.336405    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:25.347616    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:25.347694    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:25.359729    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:25.359805    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:25.370591    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:25.370658    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:25.381019    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:25.381092    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:25.390732    8358 logs.go:276] 0 containers: []
	W0729 10:55:25.390743    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:25.390804    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:25.405410    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:25.405427    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:25.405432    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:25.418064    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:25.418078    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:25.452928    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:25.452945    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:25.467018    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:25.467032    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:25.481796    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:25.481809    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:25.496982    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:25.496992    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:25.508642    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:25.508653    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:25.524556    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:25.524574    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:25.529539    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:25.529549    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:25.566852    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:25.566866    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:25.582933    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:25.582947    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:25.600096    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:25.600109    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:25.612736    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:25.612748    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:25.649080    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:25.649092    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:25.664197    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:25.664210    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:25.675809    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:25.675820    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:25.687875    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:25.687890    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:30.131441    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:30.131668    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:30.152052    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:30.152166    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:30.167040    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:30.167116    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:30.179471    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:30.179544    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:30.190219    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:30.190285    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:30.200805    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:30.200875    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:30.211144    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:30.211211    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:30.221246    8229 logs.go:276] 0 containers: []
	W0729 10:55:30.221257    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:30.221319    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:30.232137    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:30.232155    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:30.232160    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:30.250757    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:30.250767    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:30.262699    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:30.262710    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:30.276948    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:30.276959    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:30.288405    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:30.288416    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:30.300358    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:30.300367    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:30.340091    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:30.340104    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:30.352316    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:30.352327    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:30.375990    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:30.375999    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:30.380368    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:30.380374    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:30.395486    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:30.395497    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:30.408473    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:30.408486    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:30.419568    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:30.419578    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:30.433466    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:30.433477    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:30.469597    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:30.469606    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
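
Once the IDs are known, the same sources are drained on every pass: docker logs --tail 400 per container, journalctl for the kubelet and docker/cri-docker units, a severity-filtered dmesg, and kubectl describe nodes via the bundled v1.24.1 binary. A sketch of that fan-out, under the assumption that each source reduces to a single shell command; the command strings are verbatim from the ssh_runner lines above, while gather is an invented helper.

    // Illustrative fan-out over the log sources seen in the trace; each
    // command string below is copied verbatim from the ssh_runner lines.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gather(name, cmd string) {
    	fmt.Printf("Gathering logs for %s ...\n", name)
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println(name, "error:", err)
    	}
    	fmt.Print(string(out))
    }

    func main() {
    	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
    	gather("Docker", `sudo journalctl -u docker -u cri-docker -n 400`)
    	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
    	gather("describe nodes", `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
    	// Plus one docker-logs call per container ID from the enumeration
    	// step, e.g. the kube-apiserver container seen by PID 8229:
    	gather("kube-apiserver [120daa333441]", `docker logs --tail 400 120daa333441`)
    }
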
	I0729 10:55:28.212442    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:32.989183    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:33.214431    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:33.214630    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:33.240185    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:33.240304    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:33.257131    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:33.257227    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:33.272497    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:33.272573    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:33.287544    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:33.287608    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:33.297467    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:33.297537    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:33.307613    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:33.307679    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:33.317605    8358 logs.go:276] 0 containers: []
	W0729 10:55:33.317617    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:33.317675    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:33.328223    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:33.328243    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:33.328248    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:33.340059    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:33.340072    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:33.355567    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:33.355578    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:33.394451    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:33.394460    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:33.409633    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:33.409643    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:33.444004    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:33.444016    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:33.467334    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:33.467347    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:33.479562    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:33.479576    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:33.490855    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:33.490867    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:33.514865    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:33.514872    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:33.527332    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:33.527344    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:33.541846    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:33.541857    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:33.555229    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:33.555244    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:33.594341    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:33.594353    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:33.605852    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:33.605868    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:33.622840    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:33.622855    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:33.627553    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:33.627560    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:36.143527    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:37.991429    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:37.991555    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:38.011232    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:38.011310    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:38.022378    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:38.022443    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:38.032970    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:38.033047    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:38.043451    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:38.043520    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:38.053483    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:38.053545    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:38.065355    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:38.065428    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:38.075583    8229 logs.go:276] 0 containers: []
	W0729 10:55:38.075596    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:38.075651    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:38.086091    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:38.086109    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:38.086115    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:38.122224    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:38.122235    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:38.157232    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:38.157243    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:38.172166    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:38.172179    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:38.189915    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:38.189925    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:38.207493    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:38.207503    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:38.218881    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:38.218890    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:38.244812    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:38.244828    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:38.256557    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:38.256571    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:38.260889    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:38.260899    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:38.274597    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:38.274606    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:38.286169    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:38.286179    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:38.297552    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:38.297566    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:38.309096    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:38.309106    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:38.322386    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:38.322396    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
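
The recurring "container status" command is worth unpacking: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl if it is on PATH, substitutes the bare name otherwise, and falls back to docker ps -a when the crictl invocation fails. A minimal Go wrapper that runs the same fallback chain; the command string is copied verbatim from the log, everything else is illustration.

    // Illustrative: the crictl-or-docker fallback used for "container status".
    // The backquoted `which crictl || echo crictl` substitutes either the
    // resolved crictl path or the literal word "crictl"; if that command
    // then fails, the trailing || runs plain docker ps -a instead.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Print(string(out))
    }
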
	I0729 10:55:41.144804    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:41.144963    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:41.160847    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:41.160936    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:41.172861    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:41.172942    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:41.183243    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:41.183307    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:41.201744    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:41.201821    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:41.212244    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:41.212313    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:41.223604    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:41.223671    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:41.241229    8358 logs.go:276] 0 containers: []
	W0729 10:55:41.241241    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:41.241303    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:41.252329    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:41.252347    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:41.252354    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:41.263723    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:41.263736    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:41.275544    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:41.275554    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:41.289154    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:41.289165    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:41.302838    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:41.302847    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:41.316543    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:41.316554    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:41.353666    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:41.353676    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:41.369043    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:41.369054    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:41.386432    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:41.386442    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:41.398286    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:41.398298    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:41.410564    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:41.410576    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:41.447487    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:41.447499    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:41.460380    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:41.460392    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:41.464533    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:41.464540    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:41.479245    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:41.479256    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:41.491443    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:41.491454    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:41.513954    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:41.513960    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:40.836371    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:44.052346    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:45.838621    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:45.838838    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:45.856712    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:45.856786    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:45.878722    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:45.878800    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:45.889873    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:45.889953    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:45.900250    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:45.900314    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:45.910323    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:45.910393    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:45.920784    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:45.920851    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:45.931358    8229 logs.go:276] 0 containers: []
	W0729 10:55:45.931372    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:45.931427    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:45.941775    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:45.941794    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:45.941800    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:45.953773    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:45.953785    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:45.959737    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:45.959747    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:45.984884    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:45.984893    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:46.019391    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:46.019400    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:46.031653    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:46.031666    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:46.047406    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:46.047420    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:46.059787    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:46.059798    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:46.074273    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:46.074285    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:46.086778    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:46.086790    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:46.111265    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:46.111275    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:46.146843    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:46.146854    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:46.161894    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:46.161906    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:46.180196    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:46.180209    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:46.192465    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:46.192479    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:48.706846    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:49.054590    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:49.054681    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:49.067856    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:49.067932    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:49.078944    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:49.079015    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:49.090914    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:49.090991    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:49.101562    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:49.101651    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:49.112357    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:49.112429    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:49.123272    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:49.123339    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:49.134011    8358 logs.go:276] 0 containers: []
	W0729 10:55:49.134026    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:49.134081    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:49.144585    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:49.144607    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:49.144613    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:49.158420    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:49.158431    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:49.170247    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:49.170258    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:49.181572    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:49.181581    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:49.192393    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:49.192404    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:49.215962    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:49.215973    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:49.220212    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:49.220218    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:49.234425    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:49.234440    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:49.272991    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:49.273003    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:49.288317    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:49.288328    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:49.301917    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:49.301928    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:49.313743    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:49.313755    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:49.353055    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:49.353066    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:49.364742    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:49.364754    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:49.376798    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:49.376810    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:49.410638    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:49.410649    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:49.426545    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:49.426558    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:51.946399    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:53.709247    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:53.709660    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:53.749775    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:55:53.749914    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:53.769554    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:55:53.769653    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:53.783852    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:55:53.783931    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:53.802930    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:55:53.802993    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:53.814411    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:55:53.814481    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:53.825297    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:55:53.825361    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:53.839735    8229 logs.go:276] 0 containers: []
	W0729 10:55:53.839745    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:53.839799    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:53.850761    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:55:53.850778    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:55:53.850785    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:55:53.872444    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:55:53.872455    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:55:53.884656    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:53.884666    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:53.922465    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:55:53.922480    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:55:53.945231    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:55:53.945244    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:55:53.957644    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:55:53.957666    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:55:53.970095    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:55:53.970108    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:55:53.982067    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:53.982078    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:54.017581    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:55:54.017591    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:55:54.032267    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:54.032281    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:54.036853    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:55:54.036859    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:55:54.051830    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:55:54.051841    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:55:54.066323    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:55:54.066339    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:55:54.078569    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:54.078584    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:54.102413    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:55:54.102423    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
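
Stepping back, the timestamps show the overall cadence: a healthz poll hits its deadline after roughly five seconds, a full enumerate-and-gather pass follows, and the poll restarts a few seconds later, repeating until the apiserver answers or the caller gives up. A loop sketch of that cadence under stated assumptions; checkHealthz and gatherAll are hypothetical stand-ins for the steps sketched earlier, and the two-minute budget and three-second pause are guesses, not values from the trace.

    // Sketch of the retry cadence visible in the timestamps: poll healthz,
    // and on each deadline overrun re-enumerate containers and re-gather
    // logs, until the apiserver answers or an outer deadline lapses.
    package main

    import (
    	"fmt"
    	"time"
    )

    // Hypothetical stand-ins, not minikube identifiers: the stub always
    // fails, standing in for the repeated deadline overruns in this trace.
    func checkHealthz(url string) error { return fmt.Errorf("context deadline exceeded") }
    func gatherAll()                    { fmt.Println("enumerating containers, draining logs ...") }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // assumed outer budget
    	for time.Now().Before(deadline) {
    		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err == nil {
    			fmt.Println("apiserver healthy")
    			return
    		}
    		gatherAll()
    		time.Sleep(3 * time.Second) // assumed pause between passes
    	}
    	fmt.Println("gave up waiting for apiserver")
    }
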
	I0729 10:55:56.948619    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:56.948870    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:56.967357    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:56.967439    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:56.981460    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:56.981530    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:56.993104    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:56.993168    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:57.007315    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:57.007382    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:57.017905    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:57.017969    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:57.028643    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:57.028710    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:57.039098    8358 logs.go:276] 0 containers: []
	W0729 10:55:57.039111    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:57.039166    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:57.049734    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:57.049757    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:57.049763    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:57.086129    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:57.086138    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:57.097784    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:57.097794    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:57.114704    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:57.114713    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:57.128511    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:57.128521    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:57.143223    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:57.143233    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:57.154730    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:57.154740    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:57.169262    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:57.169276    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:57.180880    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:57.180889    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:57.185015    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:57.185023    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:57.218953    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:57.218963    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:57.230901    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:57.230911    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:57.243003    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:57.243016    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:57.256940    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:57.256949    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:57.278393    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:57.278400    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:57.292790    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:57.292806    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:57.331626    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:57.331637    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:56.616118    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:59.849274    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:01.617863    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:01.618017    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:01.636055    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:01.636132    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:01.649430    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:01.649517    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:01.661209    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:01.661277    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:01.671799    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:01.671860    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:01.684276    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:01.684331    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:01.697487    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:01.697552    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:01.708435    8229 logs.go:276] 0 containers: []
	W0729 10:56:01.708447    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:01.708498    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:01.718980    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:01.718998    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:01.719003    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:01.758296    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:01.758310    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:01.772936    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:01.772947    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:01.795283    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:01.795293    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:01.806698    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:01.806711    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:01.811875    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:01.811883    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:01.824280    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:01.824291    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:01.836221    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:01.836234    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:01.859593    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:01.859605    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:01.895228    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:01.895235    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:01.906507    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:01.906519    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:01.918298    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:01.918308    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:01.935637    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:01.935648    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:01.950483    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:01.950496    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:56:01.965870    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:01.965884    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
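
Each failed probe above follows the same shape: api_server.go times out on a GET to /healthz, and the runner pivots to gathering container logs before retrying. Below is a minimal sketch of the probe half, assuming the endpoint and an illustrative 5s timeout taken from the log messages; probeHealthz is an invented name, and the real client trusts the cluster CA rather than skipping verification as this sketch does.

// healthz_probe.go - illustrative sketch, not minikube's actual code, of the
// poll/timeout behaviour logged by api_server.go:253/269 above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probeHealthz(url string) error {
	client := &http.Client{
		// When this expires, net/http reports exactly the error seen above:
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch shortcut: skip CA verification; the real client pins the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	for {
		if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println(err) // the real runner falls back to gathering container logs here
			time.Sleep(2 * time.Second)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
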
	I0729 10:56:04.480653    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:04.851596    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:04.851802    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:04.870921    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:56:04.871023    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:04.889626    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:56:04.889699    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:04.901192    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:56:04.901272    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:04.911799    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:56:04.911885    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:04.922779    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:56:04.922846    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:04.933853    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:56:04.933922    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:04.944566    8358 logs.go:276] 0 containers: []
	W0729 10:56:04.944583    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:04.944651    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:04.958385    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:56:04.958403    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:56:04.958408    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:56:04.974261    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:56:04.974273    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:56:04.993875    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:56:04.993885    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:56:05.014633    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:05.014644    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:05.020193    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:05.020208    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:05.068433    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:56:05.068445    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:56:05.082983    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:56:05.082994    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:56:05.124545    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:56:05.124556    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:56:05.138683    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:05.138694    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:05.161767    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:56:05.161775    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:05.175298    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:56:05.175309    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:56:05.189745    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:56:05.189755    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:56:05.202103    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:56:05.202114    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:56:05.217509    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:05.217521    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:05.257578    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:56:05.257596    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:56:05.269030    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:56:05.269043    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:56:05.280898    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:56:05.280910    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
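
The gathering cycle itself is mechanical: one "docker ps -a --filter=name=k8s_<component>" per control-plane component (logs.go:276), then "docker logs --tail 400" per container ID found. A sketch of that loop under assumptions (this is not minikube's source; the k8s_ prefix is the container naming convention kubelet uses with cri-dockerd, and the component list is read off the log above):

// gather_logs.go - illustrative sketch of the per-component discovery and
// log-dump loop; in the real flow every command runs over ssh_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func containerIDs(name string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range components {
		ids := containerIDs(c)
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			// Matches the warning seen above for "kindnet"
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Mirrors: /bin/bash -c "docker logs --tail 400 <id>"
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s", c, id, out)
		}
	}
}
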
	I0729 10:56:07.796518    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:09.482695    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:09.482960    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:09.506137    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:09.506262    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:09.523057    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:09.523135    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:09.536004    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:09.536083    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:09.549243    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:09.549312    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:09.560657    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:09.560728    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:09.571617    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:09.571687    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:09.582041    8229 logs.go:276] 0 containers: []
	W0729 10:56:09.582052    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:09.582116    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:09.593123    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:09.593139    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:09.593146    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:56:09.604622    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:09.604635    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:09.622408    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:09.622420    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:09.659865    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:09.659873    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:09.673678    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:09.673688    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:09.687055    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:09.687070    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:09.699532    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:09.699547    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:09.713137    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:09.713148    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:09.738488    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:09.738496    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:56:09.755049    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:09.755060    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:09.769176    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:09.769186    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:09.780725    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:09.780738    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:09.785085    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:09.785094    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:09.822752    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:09.822767    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:09.837395    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:09.837405    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:12.798826    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:12.799067    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:12.825043    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:56:12.825143    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:12.847686    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:56:12.847759    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:12.860011    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:56:12.860081    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:12.870536    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:56:12.870608    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:12.880753    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:56:12.880822    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:12.895351    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:56:12.895415    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:12.905576    8358 logs.go:276] 0 containers: []
	W0729 10:56:12.905588    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:12.905653    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:12.920562    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:56:12.920585    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:12.920591    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:12.925003    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:56:12.925011    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:56:12.351556    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:12.939826    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:56:12.939837    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:56:12.951237    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:56:12.951252    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:56:12.967257    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:56:12.967269    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:56:12.984414    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:56:12.984425    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:13.009995    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:13.010006    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:13.052390    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:56:13.052412    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:56:13.073315    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:56:13.073328    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:56:13.085088    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:56:13.085102    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:56:13.105012    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:56:13.105025    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:56:13.116732    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:56:13.116743    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:56:13.130262    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:13.130277    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:13.151644    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:13.151654    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:13.186951    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:56:13.186961    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:56:13.224491    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:56:13.224506    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:56:13.236411    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:56:13.236422    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:56:15.750561    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:17.353795    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:17.353938    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:17.364591    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:17.364669    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:17.375006    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:17.375073    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:17.385725    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:17.385795    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:17.397765    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:17.397829    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:17.408595    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:17.408661    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:17.419396    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:17.419457    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:17.436017    8229 logs.go:276] 0 containers: []
	W0729 10:56:17.436028    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:17.436078    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:17.446407    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:17.446423    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:17.446428    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:17.460516    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:17.460529    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:17.472306    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:17.472319    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:17.476701    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:17.476707    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:17.490860    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:17.490872    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:56:17.502857    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:17.502868    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:56:17.515175    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:17.515184    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:17.533955    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:17.533970    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:17.545512    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:17.545533    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:17.557748    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:17.557761    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:17.595363    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:17.595373    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:17.607136    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:17.607146    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:17.621897    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:17.621907    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:17.633957    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:17.633967    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:17.658968    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:17.658978    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:20.196069    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:20.752844    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:20.752925    8358 kubeadm.go:597] duration metric: took 4m4.086944209s to restartPrimaryControlPlane
	W0729 10:56:20.752998    8358 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 10:56:20.753032    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 10:56:21.780724    8358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.027695334s)
	I0729 10:56:21.780781    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:56:21.786054    8358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:56:21.788783    8358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:56:21.791508    8358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:56:21.791516    8358 kubeadm.go:157] found existing configuration files:
	
	I0729 10:56:21.791539    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/admin.conf
	I0729 10:56:21.794222    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:56:21.794246    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:56:21.796655    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/kubelet.conf
	I0729 10:56:21.799636    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:56:21.799659    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:56:21.802497    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/controller-manager.conf
	I0729 10:56:21.804864    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:56:21.804886    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:56:21.807724    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/scheduler.conf
	I0729 10:56:21.810608    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:56:21.810631    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
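
The four grep/rm pairs above implement one rule: keep a kubeconfig only if it already references the expected control-plane endpoint, otherwise delete it before re-running kubeadm init. A sketch of that sweep, assuming local file access (the real code shells each grep and rm through ssh_runner, and sweepStaleConfigs is an invented name):

// stale_config.go - sketch of the kubeadm.go:163 stale-config cleanup; needs
// root to touch /etc/kubernetes, exactly like the sudo commands in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func sweepStaleConfigs(endpoint string) {
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the right endpoint: keep it
		}
		// Equivalent to: sudo rm -f <conf>; the error from removing a
		// missing file is deliberately ignored, matching "rm -f".
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
		os.Remove(conf)
	}
}

func main() {
	sweepStaleConfigs("https://control-plane.minikube.internal:51474")
}
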
	I0729 10:56:21.813019    8358 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:56:21.830570    8358 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 10:56:21.830621    8358 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:56:21.879001    8358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:56:21.879058    8358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:56:21.879142    8358 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:56:21.927603    8358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:56:21.930721    8358 out.go:204]   - Generating certificates and keys ...
	I0729 10:56:21.930762    8358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:56:21.930793    8358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:56:21.930831    8358 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 10:56:21.930862    8358 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 10:56:21.930900    8358 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 10:56:21.930929    8358 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 10:56:21.930993    8358 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 10:56:21.931041    8358 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 10:56:21.931085    8358 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 10:56:21.931153    8358 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 10:56:21.931182    8358 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 10:56:21.931217    8358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:56:21.989187    8358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:56:22.055605    8358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:56:22.332504    8358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:56:22.379098    8358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:56:22.407936    8358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:56:22.408289    8358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:56:22.408313    8358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:56:22.493563    8358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:56:22.497726    8358 out.go:204]   - Booting up control plane ...
	I0729 10:56:22.497777    8358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:56:22.497816    8358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:56:22.498069    8358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:56:22.498542    8358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:56:22.499832    8358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 10:56:25.197137    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:25.197269    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:25.208782    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:25.208861    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:25.219479    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:25.219546    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:25.230726    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:25.230803    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:25.243878    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:25.243950    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:25.256462    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:25.256529    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:25.267514    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:25.267581    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:25.278990    8229 logs.go:276] 0 containers: []
	W0729 10:56:25.279009    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:25.279068    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:25.289880    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:25.289897    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:25.289909    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:56:25.302046    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:25.302057    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:25.314549    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:25.314561    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:25.340529    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:25.340547    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:25.378790    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:25.378810    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:25.394242    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:25.394256    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:25.406745    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:25.406757    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:56:25.426218    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:25.426229    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:25.441265    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:25.441276    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:25.453116    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:25.453130    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:25.478686    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:25.478701    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:25.494558    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:25.494570    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:25.499298    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:25.499308    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:25.535941    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:25.535952    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:25.554594    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:25.554607    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:27.003138    8358 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503115 seconds
	I0729 10:56:27.003281    8358 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:56:27.007545    8358 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:56:27.516777    8358 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:56:27.516944    8358 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-294000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:56:28.021036    8358 kubeadm.go:310] [bootstrap-token] Using token: 7dco59.hhqt2q6ndro3ugx4
	I0729 10:56:28.023847    8358 out.go:204]   - Configuring RBAC rules ...
	I0729 10:56:28.023916    8358 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:56:28.023971    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:56:28.028660    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:56:28.029661    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:56:28.030757    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:56:28.031658    8358 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:56:28.035253    8358 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:56:28.219801    8358 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:56:28.424585    8358 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:56:28.425198    8358 kubeadm.go:310] 
	I0729 10:56:28.425300    8358 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:56:28.425329    8358 kubeadm.go:310] 
	I0729 10:56:28.425398    8358 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:56:28.425402    8358 kubeadm.go:310] 
	I0729 10:56:28.425436    8358 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:56:28.425471    8358 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:56:28.425495    8358 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:56:28.425497    8358 kubeadm.go:310] 
	I0729 10:56:28.425528    8358 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:56:28.425531    8358 kubeadm.go:310] 
	I0729 10:56:28.425557    8358 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:56:28.425561    8358 kubeadm.go:310] 
	I0729 10:56:28.425599    8358 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:56:28.425665    8358 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:56:28.425703    8358 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:56:28.425705    8358 kubeadm.go:310] 
	I0729 10:56:28.425767    8358 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:56:28.425808    8358 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:56:28.425811    8358 kubeadm.go:310] 
	I0729 10:56:28.425869    8358 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7dco59.hhqt2q6ndro3ugx4 \
	I0729 10:56:28.425949    8358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8d6a503498cfac617ec351c4234f65718d8cbc12c41bd005a6931d270830028d \
	I0729 10:56:28.425961    8358 kubeadm.go:310] 	--control-plane 
	I0729 10:56:28.425964    8358 kubeadm.go:310] 
	I0729 10:56:28.426003    8358 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:56:28.426005    8358 kubeadm.go:310] 
	I0729 10:56:28.426045    8358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7dco59.hhqt2q6ndro3ugx4 \
	I0729 10:56:28.426096    8358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8d6a503498cfac617ec351c4234f65718d8cbc12c41bd005a6931d270830028d 
	I0729 10:56:28.426185    8358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 10:56:28.426191    8358 cni.go:84] Creating CNI manager for ""
	I0729 10:56:28.426199    8358 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:56:28.429990    8358 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 10:56:28.436117    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 10:56:28.439311    8358 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
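
The scp step above writes a 496-byte CNI config to /etc/cni/net.d/1-k8s.conflist; the log shows only the byte count, not the payload. The sketch below writes a representative bridge-plus-portmap conflist of the kind a bridge CNI setup uses; the JSON content and subnet are assumptions, not the literal file minikube generates.

// write_cni.go - sketch of materialising a bridge CNI conflist; needs root,
// like the "sudo mkdir -p /etc/cni/net.d" step in the log.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
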
	I0729 10:56:28.445128    8358 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:56:28.445224    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-294000 minikube.k8s.io/updated_at=2024_07_29T10_56_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=stopped-upgrade-294000 minikube.k8s.io/primary=true
	I0729 10:56:28.445271    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:56:28.478168    8358 kubeadm.go:1113] duration metric: took 32.994083ms to wait for elevateKubeSystemPrivileges
	I0729 10:56:28.487697    8358 ops.go:34] apiserver oom_adj: -16
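
The oom_adj check a few lines up ("cat /proc/$(pgrep kube-apiserver)/oom_adj", reported here as -16) verifies the kernel is strongly discouraged from OOM-killing the apiserver. A sketch of the same check, reusing the pgrep pattern from this log (the -xnf flags are copied from the "waiting for apiserver process" command below):

// oom_score.go - sketch of the ops.go:34 oom_adj readout; Linux-only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no apiserver process:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
}
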
	I0729 10:56:28.487829    8358 kubeadm.go:394] duration metric: took 4m11.835398542s to StartCluster
	I0729 10:56:28.487842    8358 settings.go:142] acquiring lock: {Name:mk3ce889c5cdf5c514cbf9155d52acf6d279a087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:56:28.487929    8358 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:56:28.488336    8358 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/kubeconfig: {Name:mkf75fdff2d3e918223b7f2dbeb4359c01007a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:56:28.488554    8358 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
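
The "Will wait 6m0s for node" line above sets the budget whose exhaustion produces the failure at the bottom of this log. A minimal sketch of such a deadline-bounded wait, assuming a context.WithTimeout design (healthy is a stand-in that always fails here, purely to demonstrate the timeout path; minikube's actual wait logic is more involved):

// node_wait.go - sketch of a 6m0s node-wait budget around a healthz probe.
package main

import (
	"context"
	"fmt"
	"time"
)

// healthy stands in for the real healthz probe; always false to show the timeout path.
func healthy() bool { return false }

func waitForNode(timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// %s on a time.Duration prints "6m0s", matching the log's phrasing
			return fmt.Errorf("wait %s for node: wait for healthy API server: %w",
				timeout, ctx.Err())
		case <-ticker.C:
			if healthy() {
				return nil
			}
		}
	}
}

func main() {
	fmt.Println(waitForNode(6 * time.Minute))
}
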
	I0729 10:56:28.488604    8358 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:56:28.488586    8358 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 10:56:28.488667    8358 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-294000"
	I0729 10:56:28.488681    8358 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-294000"
	W0729 10:56:28.488684    8358 addons.go:243] addon storage-provisioner should already be in state true
	I0729 10:56:28.488696    8358 host.go:66] Checking if "stopped-upgrade-294000" exists ...
	I0729 10:56:28.488702    8358 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-294000"
	I0729 10:56:28.488726    8358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-294000"
	I0729 10:56:28.493081    8358 out.go:177] * Verifying Kubernetes components...
	I0729 10:56:28.493728    8358 kapi.go:59] client config for stopped-upgrade-294000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.key", CAFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1020c4080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
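
The rest.Config dumped at kapi.go:59 above can be turned into a working clientset with k8s.io/client-go. A sketch under assumptions: the Host and certificate paths are copied from the log, but the Nodes().List call is only a smoke test, not what minikube runs at this point.

// kapi_client.go - sketch of building a client-go clientset from the dumped config.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err) // with the apiserver unhealthy, this fails just like the healthz probes
	}
	fmt.Println("nodes:", len(nodes.Items))
}
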
	I0729 10:56:28.497306    8358 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-294000"
	W0729 10:56:28.497312    8358 addons.go:243] addon default-storageclass should already be in state true
	I0729 10:56:28.497320    8358 host.go:66] Checking if "stopped-upgrade-294000" exists ...
	I0729 10:56:28.497850    8358 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:56:28.497855    8358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:56:28.497861    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	I0729 10:56:28.501065    8358 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:56:28.070340    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:28.504106    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:56:28.508058    8358 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:56:28.508065    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:56:28.508071    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	I0729 10:56:28.580255    8358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:56:28.585943    8358 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:56:28.585988    8358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:56:28.589598    8358 api_server.go:72] duration metric: took 101.034625ms to wait for apiserver process to appear ...
	I0729 10:56:28.589606    8358 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:56:28.589613    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:28.630061    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:56:28.638193    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
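
The two addon applies above run the version-pinned kubectl with KUBECONFIG pointing at the in-VM kubeconfig. A sketch of that invocation, assuming local exec for simplicity (the real flow shells it through ssh_runner inside the guest, and applyManifest is an invented helper):

// apply_addons.go - sketch of applying addon manifests with a pinned kubectl.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifest(path string) error {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.24.1/kubectl", "apply", "-f", path)
	// Mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply -f <path>
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v: %s", path, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyManifest(m); err != nil {
			fmt.Println(err)
		}
	}
}
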
	I0729 10:56:33.072577    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:33.072958    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:33.109701    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:33.109835    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:33.129392    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:33.129498    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:33.144082    8229 logs.go:276] 4 containers: [571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:33.144159    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:33.156187    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:33.156254    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:33.166875    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:33.166947    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:33.177754    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:33.177830    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:33.187908    8229 logs.go:276] 0 containers: []
	W0729 10:56:33.187918    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:33.187981    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:33.199411    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:33.199427    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:33.199433    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:33.235339    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:33.235351    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:33.251497    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:33.251507    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:33.265402    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:33.265412    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:33.277901    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:33.277913    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:33.289833    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:33.289844    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:33.301585    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:33.301596    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:33.306639    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:33.306649    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
	I0729 10:56:33.319243    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:33.319252    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:56:33.331376    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:33.331386    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:33.346487    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:33.346502    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:33.358313    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:33.358324    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:33.395649    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:33.395657    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:33.419167    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:33.419177    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:33.450674    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:33.450684    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:33.591621    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:33.591661    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:35.964379    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:38.591961    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:38.592023    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:40.966636    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:40.966813    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:40.978142    8229 logs.go:276] 1 containers: [120daa333441]
	I0729 10:56:40.978221    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:40.991273    8229 logs.go:276] 1 containers: [6cfb0c541a62]
	I0729 10:56:40.991347    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:41.003707    8229 logs.go:276] 6 containers: [7a8a34a606b3 68fd6c91f96e 571220e0392b 19d652647dcb f179b7a6916f 74a37cb60d42]
	I0729 10:56:41.003787    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:41.014517    8229 logs.go:276] 1 containers: [60be90d0d8ea]
	I0729 10:56:41.014583    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:41.029969    8229 logs.go:276] 1 containers: [5a4490e00797]
	I0729 10:56:41.030034    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:41.040532    8229 logs.go:276] 1 containers: [a047283c1326]
	I0729 10:56:41.040601    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:41.051534    8229 logs.go:276] 0 containers: []
	W0729 10:56:41.051544    8229 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:41.051600    8229 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:41.062482    8229 logs.go:276] 1 containers: [585ed2b764f6]
	I0729 10:56:41.062496    8229 logs.go:123] Gathering logs for container status ...
	I0729 10:56:41.062501    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:41.074226    8229 logs.go:123] Gathering logs for coredns [7a8a34a606b3] ...
	I0729 10:56:41.074236    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a8a34a606b3"
	I0729 10:56:41.085415    8229 logs.go:123] Gathering logs for coredns [19d652647dcb] ...
	I0729 10:56:41.085428    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19d652647dcb"
	I0729 10:56:41.101968    8229 logs.go:123] Gathering logs for coredns [74a37cb60d42] ...
	I0729 10:56:41.101981    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74a37cb60d42"
	I0729 10:56:41.115974    8229 logs.go:123] Gathering logs for coredns [f179b7a6916f] ...
	I0729 10:56:41.115985    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f179b7a6916f"
	I0729 10:56:41.133199    8229 logs.go:123] Gathering logs for kube-proxy [5a4490e00797] ...
	I0729 10:56:41.133211    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a4490e00797"
	I0729 10:56:41.145290    8229 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:41.145302    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:41.170469    8229 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:41.170490    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:41.212504    8229 logs.go:123] Gathering logs for kube-apiserver [120daa333441] ...
	I0729 10:56:41.212523    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 120daa333441"
	I0729 10:56:41.227761    8229 logs.go:123] Gathering logs for coredns [68fd6c91f96e] ...
	I0729 10:56:41.227771    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68fd6c91f96e"
	I0729 10:56:41.238985    8229 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:41.238999    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:41.283639    8229 logs.go:123] Gathering logs for storage-provisioner [585ed2b764f6] ...
	I0729 10:56:41.283653    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 585ed2b764f6"
	I0729 10:56:41.297299    8229 logs.go:123] Gathering logs for kube-scheduler [60be90d0d8ea] ...
	I0729 10:56:41.297315    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60be90d0d8ea"
	I0729 10:56:41.316004    8229 logs.go:123] Gathering logs for kube-controller-manager [a047283c1326] ...
	I0729 10:56:41.316019    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a047283c1326"
	I0729 10:56:41.333870    8229 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:41.333884    8229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:41.338696    8229 logs.go:123] Gathering logs for etcd [6cfb0c541a62] ...
	I0729 10:56:41.338704    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cfb0c541a62"
	I0729 10:56:41.353332    8229 logs.go:123] Gathering logs for coredns [571220e0392b] ...
	I0729 10:56:41.353345    8229 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571220e0392b"
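
The "Gathering logs for ..." steps above all follow one two-step pattern: resolve each component's container ID with a docker name filter, then tail that container's log. A minimal Go sketch of the same flow — an illustration only, not minikube's actual logs.go; the component names and the 400-line tail come straight from the commands above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, _ := listContainers(name)
			for _, id := range ids {
				// mirrors: docker logs --tail 400 <id>
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
			}
		}
	}
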
	I0729 10:56:43.867820    8229 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:43.592317    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:43.592349    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:48.869709    8229 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:48.875212    8229 out.go:177] 
	W0729 10:56:48.879117    8229 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 10:56:48.879132    8229 out.go:239] * 
	W0729 10:56:48.880332    8229 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:56:48.891147    8229 out.go:177] 
	I0729 10:56:48.592752    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:48.592784    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:53.593528    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:53.593573    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:58.593817    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:58.593860    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 10:56:58.968738    8358 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 10:56:58.973042    8358 out.go:177] * Enabled addons: storage-provisioner
	I0729 10:56:58.983969    8358 addons.go:510] duration metric: took 30.495914833s for enable addons: enabled=[storage-provisioner]
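
The alternating "Checking apiserver healthz ..." / "stopped: ... context deadline exceeded" pairs above are a plain HTTP poll: GET /healthz with a short per-request timeout, retried until the 6m0s budget from the GUEST_START error runs out. A rough Go sketch of such a loop, assuming the ~5s request timeout implied by the log spacing (an illustration, not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed from the ~5s gap between "stopped:" lines
			Transport: &http.Transport{
				// the apiserver's cert is not trusted by this standalone probe
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthz reported healthy")
					return
				}
			} else {
				fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded while awaiting headers
			}
			time.Sleep(time.Second) // brief pause between probes
		}
		fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
	}
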
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-29 17:47:59 UTC, ends at Mon 2024-07-29 17:57:05 UTC. --
	Jul 29 17:56:41 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:41Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 17:56:46 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:46Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 17:56:49 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:49Z" level=error msg="ContainerStats resp: {0x4000759180 linux}"
	Jul 29 17:56:49 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:49Z" level=error msg="ContainerStats resp: {0x4000759280 linux}"
	Jul 29 17:56:50 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:50Z" level=error msg="ContainerStats resp: {0x400097a080 linux}"
	Jul 29 17:56:51 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 17:56:51 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:51Z" level=error msg="ContainerStats resp: {0x400097ac80 linux}"
	Jul 29 17:56:51 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:51Z" level=error msg="ContainerStats resp: {0x400097b0c0 linux}"
	Jul 29 17:56:51 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:51Z" level=error msg="ContainerStats resp: {0x4000a66180 linux}"
	Jul 29 17:56:51 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:51Z" level=error msg="ContainerStats resp: {0x400097bc00 linux}"
	Jul 29 17:56:51 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:51Z" level=error msg="ContainerStats resp: {0x4000a66980 linux}"
	Jul 29 17:56:51 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:51Z" level=error msg="ContainerStats resp: {0x4000a66b40 linux}"
	Jul 29 17:56:51 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:51Z" level=error msg="ContainerStats resp: {0x400062f400 linux}"
	Jul 29 17:56:56 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:56:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 17:57:01 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:57:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 17:57:01 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:57:01Z" level=error msg="ContainerStats resp: {0x400097a240 linux}"
	Jul 29 17:57:01 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:57:01Z" level=error msg="ContainerStats resp: {0x400097a380 linux}"
	Jul 29 17:57:02 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:57:02Z" level=error msg="ContainerStats resp: {0x40000bfdc0 linux}"
	Jul 29 17:57:03 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:57:03Z" level=error msg="ContainerStats resp: {0x400062f5c0 linux}"
	Jul 29 17:57:03 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:57:03Z" level=error msg="ContainerStats resp: {0x400097b180 linux}"
	Jul 29 17:57:03 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:57:03Z" level=error msg="ContainerStats resp: {0x400067c080 linux}"
	Jul 29 17:57:03 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:57:03Z" level=error msg="ContainerStats resp: {0x400097b540 linux}"
	Jul 29 17:57:03 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:57:03Z" level=error msg="ContainerStats resp: {0x400067cb40 linux}"
	Jul 29 17:57:03 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:57:03Z" level=error msg="ContainerStats resp: {0x400067cd00 linux}"
	Jul 29 17:57:03 running-upgrade-504000 cri-dockerd[3067]: time="2024-07-29T17:57:03Z" level=error msg="ContainerStats resp: {0x400067d4c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	7a8a34a606b3c       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   2445fd0564a27
	68fd6c91f96ed       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   6a6b62d3b8876
	571220e0392b2       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   2445fd0564a27
	19d652647dcb8       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   6a6b62d3b8876
	585ed2b764f6f       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   fa2cea4c50f9c
	5a4490e007978       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   0a683334359f3
	60be90d0d8eaa       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   3c126d1a992cf
	a047283c1326b       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   0048677f62e68
	6cfb0c541a62d       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   0f60dc6f040bf
	120daa3334418       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   30abc0da2f890
	
	
	==> coredns [19d652647dcb] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3444638834726205842.6117960701856314691. HINFO: read udp 10.244.0.3:38194->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3444638834726205842.6117960701856314691. HINFO: read udp 10.244.0.3:34490->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3444638834726205842.6117960701856314691. HINFO: read udp 10.244.0.3:52100->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3444638834726205842.6117960701856314691. HINFO: read udp 10.244.0.3:57084->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3444638834726205842.6117960701856314691. HINFO: read udp 10.244.0.3:41244->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3444638834726205842.6117960701856314691. HINFO: read udp 10.244.0.3:58020->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3444638834726205842.6117960701856314691. HINFO: read udp 10.244.0.3:58113->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3444638834726205842.6117960701856314691. HINFO: read udp 10.244.0.3:53971->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3444638834726205842.6117960701856314691. HINFO: read udp 10.244.0.3:46224->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3444638834726205842.6117960701856314691. HINFO: read udp 10.244.0.3:60264->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [571220e0392b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3548563698416950351.3616246470943306268. HINFO: read udp 10.244.0.2:57838->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3548563698416950351.3616246470943306268. HINFO: read udp 10.244.0.2:42669->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3548563698416950351.3616246470943306268. HINFO: read udp 10.244.0.2:41099->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3548563698416950351.3616246470943306268. HINFO: read udp 10.244.0.2:42945->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3548563698416950351.3616246470943306268. HINFO: read udp 10.244.0.2:52401->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3548563698416950351.3616246470943306268. HINFO: read udp 10.244.0.2:60609->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3548563698416950351.3616246470943306268. HINFO: read udp 10.244.0.2:44776->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3548563698416950351.3616246470943306268. HINFO: read udp 10.244.0.2:44177->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3548563698416950351.3616246470943306268. HINFO: read udp 10.244.0.2:49327->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3548563698416950351.3616246470943306268. HINFO: read udp 10.244.0.2:36303->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [68fd6c91f96e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2693985312647731015.2174148578858965809. HINFO: read udp 10.244.0.3:56337->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2693985312647731015.2174148578858965809. HINFO: read udp 10.244.0.3:46970->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2693985312647731015.2174148578858965809. HINFO: read udp 10.244.0.3:58510->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2693985312647731015.2174148578858965809. HINFO: read udp 10.244.0.3:45199->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2693985312647731015.2174148578858965809. HINFO: read udp 10.244.0.3:34063->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2693985312647731015.2174148578858965809. HINFO: read udp 10.244.0.3:51898->10.0.2.3:53: i/o timeout
	
	
	==> coredns [7a8a34a606b3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5088275412259474996.5004075750416481447. HINFO: read udp 10.244.0.2:42392->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5088275412259474996.5004075750416481447. HINFO: read udp 10.244.0.2:39304->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5088275412259474996.5004075750416481447. HINFO: read udp 10.244.0.2:51863->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5088275412259474996.5004075750416481447. HINFO: read udp 10.244.0.2:39679->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5088275412259474996.5004075750416481447. HINFO: read udp 10.244.0.2:53434->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5088275412259474996.5004075750416481447. HINFO: read udp 10.244.0.2:39361->10.0.2.3:53: i/o timeout
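
All four coredns instances above fail the same way: their startup HINFO probes to the upstream resolver 10.0.2.3:53 (QEMU's built-in slirp DNS) never get an answer. That symptom can be reproduced with a plain resolver pointed at the same address; a small Go sketch (the name queried is arbitrary):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// ignore the default server and force the guest's slirp resolver
				return d.DialContext(ctx, "udp", "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		_, err := r.LookupHost(ctx, "kubernetes.io") // arbitrary name
		fmt.Println(err) // expect "... i/o timeout" when 10.0.2.3:53 never answers
	}
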
	
	
	==> describe nodes <==
	Name:               running-upgrade-504000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-504000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=running-upgrade-504000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T10_52_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:52:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-504000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:57:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:52:47 +0000   Mon, 29 Jul 2024 17:52:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:52:47 +0000   Mon, 29 Jul 2024 17:52:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:52:47 +0000   Mon, 29 Jul 2024 17:52:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:52:47 +0000   Mon, 29 Jul 2024 17:52:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-504000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 10d39b54958d4b01aaf20be34ea77329
	  System UUID:                10d39b54958d4b01aaf20be34ea77329
	  Boot ID:                    ad52d9f5-c449-43ff-9f66-daed85f66211
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2zmff                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-w5q6k                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-504000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-504000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-504000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-sjtjj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-504000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m24s (x5 over 4m24s)  kubelet          Node running-upgrade-504000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s (x4 over 4m24s)  kubelet          Node running-upgrade-504000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s (x4 over 4m24s)  kubelet          Node running-upgrade-504000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-504000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-504000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-504000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-504000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-504000 event: Registered Node running-upgrade-504000 in Controller
	
	
	==> dmesg <==
	[  +1.710189] systemd-fstab-generator[880]: Ignoring "noauto" for root device
	[  +0.065382] systemd-fstab-generator[891]: Ignoring "noauto" for root device
	[  +0.063840] systemd-fstab-generator[902]: Ignoring "noauto" for root device
	[  +1.134410] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.072308] systemd-fstab-generator[1052]: Ignoring "noauto" for root device
	[  +0.060250] systemd-fstab-generator[1063]: Ignoring "noauto" for root device
	[  +2.036560] systemd-fstab-generator[1290]: Ignoring "noauto" for root device
	[  +9.146005] systemd-fstab-generator[1936]: Ignoring "noauto" for root device
	[  +2.386585] systemd-fstab-generator[2218]: Ignoring "noauto" for root device
	[  +0.129864] systemd-fstab-generator[2254]: Ignoring "noauto" for root device
	[  +0.082287] systemd-fstab-generator[2265]: Ignoring "noauto" for root device
	[  +0.088791] systemd-fstab-generator[2281]: Ignoring "noauto" for root device
	[  +2.875140] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.186713] systemd-fstab-generator[3021]: Ignoring "noauto" for root device
	[  +0.069652] systemd-fstab-generator[3035]: Ignoring "noauto" for root device
	[  +0.063693] systemd-fstab-generator[3046]: Ignoring "noauto" for root device
	[  +0.080454] systemd-fstab-generator[3060]: Ignoring "noauto" for root device
	[  +2.225860] systemd-fstab-generator[3212]: Ignoring "noauto" for root device
	[  +3.712843] systemd-fstab-generator[3606]: Ignoring "noauto" for root device
	[  +1.629501] systemd-fstab-generator[4190]: Ignoring "noauto" for root device
	[ +20.110238] kauditd_printk_skb: 68 callbacks suppressed
	[Jul29 17:52] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.539825] systemd-fstab-generator[12027]: Ignoring "noauto" for root device
	[  +6.140318] systemd-fstab-generator[12623]: Ignoring "noauto" for root device
	[  +0.450071] systemd-fstab-generator[12756]: Ignoring "noauto" for root device
	
	
	==> etcd [6cfb0c541a62] <==
	{"level":"info","ts":"2024-07-29T17:52:42.722Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T17:52:42.722Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-29T17:52:42.722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-29T17:52:42.722Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-29T17:52:42.722Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T17:52:42.722Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T17:52:42.722Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T17:52:43.708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T17:52:43.708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T17:52:43.708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-29T17:52:43.708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T17:52:43.708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T17:52:43.708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T17:52:43.708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T17:52:43.708Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:52:43.710Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:52:43.710Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:52:43.710Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-504000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T17:52:43.710Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T17:52:43.710Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:52:43.710Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T17:52:43.713Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-29T17:52:43.713Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T17:52:43.713Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T17:52:43.713Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 17:57:05 up 9 min,  0 users,  load average: 0.19, 0.34, 0.21
	Linux running-upgrade-504000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [120daa333441] <==
	I0729 17:52:44.901768       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 17:52:44.911669       1 controller.go:611] quota admission added evaluator for: namespaces
	I0729 17:52:44.954851       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 17:52:44.954867       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 17:52:44.954856       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 17:52:44.955883       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 17:52:44.956449       1 cache.go:39] Caches are synced for autoregister controller
	I0729 17:52:45.695775       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 17:52:45.859530       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 17:52:45.863299       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 17:52:45.863338       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 17:52:45.987790       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 17:52:45.997355       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 17:52:46.015837       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0729 17:52:46.017996       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0729 17:52:46.018371       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 17:52:46.019663       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 17:52:46.987788       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 17:52:47.702778       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 17:52:47.709293       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0729 17:52:47.714952       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 17:52:47.760129       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 17:53:01.843415       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0729 17:53:01.942986       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0729 17:53:02.368005       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [a047283c1326] <==
	I0729 17:53:01.191727       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0729 17:53:01.191775       1 shared_informer.go:262] Caches are synced for crt configmap
	I0729 17:53:01.191816       1 shared_informer.go:262] Caches are synced for PVC protection
	I0729 17:53:01.191829       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0729 17:53:01.191891       1 event.go:294] "Event occurred" object="running-upgrade-504000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-504000 event: Registered Node running-upgrade-504000 in Controller"
	I0729 17:53:01.192677       1 shared_informer.go:262] Caches are synced for cronjob
	I0729 17:53:01.194221       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0729 17:53:01.194949       1 shared_informer.go:262] Caches are synced for job
	I0729 17:53:01.198174       1 shared_informer.go:262] Caches are synced for stateful set
	I0729 17:53:01.241409       1 shared_informer.go:262] Caches are synced for disruption
	I0729 17:53:01.241422       1 disruption.go:371] Sending events to api server.
	I0729 17:53:01.331397       1 shared_informer.go:262] Caches are synced for PV protection
	I0729 17:53:01.342788       1 shared_informer.go:262] Caches are synced for persistent volume
	I0729 17:53:01.349691       1 shared_informer.go:262] Caches are synced for expand
	I0729 17:53:01.394008       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 17:53:01.398885       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 17:53:01.404266       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 17:53:01.442856       1 shared_informer.go:262] Caches are synced for HPA
	I0729 17:53:01.812165       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 17:53:01.840794       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 17:53:01.840875       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0729 17:53:01.846955       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sjtjj"
	I0729 17:53:01.944324       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0729 17:53:02.195573       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2zmff"
	I0729 17:53:02.198931       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-w5q6k"
	
	
	==> kube-proxy [5a4490e00797] <==
	I0729 17:53:02.354776       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0729 17:53:02.354835       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0729 17:53:02.354849       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 17:53:02.365984       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 17:53:02.365997       1 server_others.go:206] "Using iptables Proxier"
	I0729 17:53:02.366027       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 17:53:02.366176       1 server.go:661] "Version info" version="v1.24.1"
	I0729 17:53:02.366206       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:53:02.366538       1 config.go:317] "Starting service config controller"
	I0729 17:53:02.366550       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 17:53:02.366590       1 config.go:226] "Starting endpoint slice config controller"
	I0729 17:53:02.366597       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 17:53:02.366895       1 config.go:444] "Starting node config controller"
	I0729 17:53:02.366929       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 17:53:02.467349       1 shared_informer.go:262] Caches are synced for node config
	I0729 17:53:02.467366       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0729 17:53:02.467374       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [60be90d0d8ea] <==
	W0729 17:52:44.908032       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 17:52:44.908259       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 17:52:44.908186       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 17:52:44.908263       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 17:52:44.908199       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 17:52:44.908320       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 17:52:44.908224       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 17:52:44.908334       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 17:52:44.908435       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 17:52:44.908453       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 17:52:44.908478       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 17:52:44.908488       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 17:52:44.909390       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 17:52:44.909435       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 17:52:44.909469       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 17:52:44.909500       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 17:52:44.909539       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 17:52:44.909584       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 17:52:44.909616       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 17:52:44.909641       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 17:52:45.757342       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 17:52:45.757431       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 17:52:45.831633       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 17:52:45.831657       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0729 17:52:46.104767       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-29 17:47:59 UTC, ends at Mon 2024-07-29 17:57:05 UTC. --
	Jul 29 17:52:49 running-upgrade-504000 kubelet[12629]: I0729 17:52:49.791513   12629 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/e7e224bf-7909-4d11-abd1-421b9b354184/volumes"
	Jul 29 17:53:01 running-upgrade-504000 kubelet[12629]: I0729 17:53:01.139680   12629 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 17:53:01 running-upgrade-504000 kubelet[12629]: I0729 17:53:01.140093   12629 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 17:53:01 running-upgrade-504000 kubelet[12629]: I0729 17:53:01.197067   12629 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 17:53:01 running-upgrade-504000 kubelet[12629]: I0729 17:53:01.341575   12629 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/86f59704-c1f4-4268-a873-d05ea3fb08a4-tmp\") pod \"storage-provisioner\" (UID: \"86f59704-c1f4-4268-a873-d05ea3fb08a4\") " pod="kube-system/storage-provisioner"
	Jul 29 17:53:01 running-upgrade-504000 kubelet[12629]: I0729 17:53:01.341600   12629 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p427n\" (UniqueName: \"kubernetes.io/projected/86f59704-c1f4-4268-a873-d05ea3fb08a4-kube-api-access-p427n\") pod \"storage-provisioner\" (UID: \"86f59704-c1f4-4268-a873-d05ea3fb08a4\") " pod="kube-system/storage-provisioner"
	Jul 29 17:53:01 running-upgrade-504000 kubelet[12629]: E0729 17:53:01.446208   12629 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 29 17:53:01 running-upgrade-504000 kubelet[12629]: E0729 17:53:01.446230   12629 projected.go:192] Error preparing data for projected volume kube-api-access-p427n for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 29 17:53:01 running-upgrade-504000 kubelet[12629]: E0729 17:53:01.446270   12629 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/86f59704-c1f4-4268-a873-d05ea3fb08a4-kube-api-access-p427n podName:86f59704-c1f4-4268-a873-d05ea3fb08a4 nodeName:}" failed. No retries permitted until 2024-07-29 17:53:01.94625583 +0000 UTC m=+14.254963814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p427n" (UniqueName: "kubernetes.io/projected/86f59704-c1f4-4268-a873-d05ea3fb08a4-kube-api-access-p427n") pod "storage-provisioner" (UID: "86f59704-c1f4-4268-a873-d05ea3fb08a4") : configmap "kube-root-ca.crt" not found
	Jul 29 17:53:01 running-upgrade-504000 kubelet[12629]: I0729 17:53:01.851528   12629 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: I0729 17:53:02.045037   12629 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a4fe213-2942-44cf-8b24-ff5bf528178d-xtables-lock\") pod \"kube-proxy-sjtjj\" (UID: \"8a4fe213-2942-44cf-8b24-ff5bf528178d\") " pod="kube-system/kube-proxy-sjtjj"
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: I0729 17:53:02.045240   12629 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a4fe213-2942-44cf-8b24-ff5bf528178d-lib-modules\") pod \"kube-proxy-sjtjj\" (UID: \"8a4fe213-2942-44cf-8b24-ff5bf528178d\") " pod="kube-system/kube-proxy-sjtjj"
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: I0729 17:53:02.045274   12629 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-974cw\" (UniqueName: \"kubernetes.io/projected/8a4fe213-2942-44cf-8b24-ff5bf528178d-kube-api-access-974cw\") pod \"kube-proxy-sjtjj\" (UID: \"8a4fe213-2942-44cf-8b24-ff5bf528178d\") " pod="kube-system/kube-proxy-sjtjj"
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: I0729 17:53:02.045335   12629 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a4fe213-2942-44cf-8b24-ff5bf528178d-kube-proxy\") pod \"kube-proxy-sjtjj\" (UID: \"8a4fe213-2942-44cf-8b24-ff5bf528178d\") " pod="kube-system/kube-proxy-sjtjj"
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: E0729 17:53:02.045188   12629 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: E0729 17:53:02.045405   12629 projected.go:192] Error preparing data for projected volume kube-api-access-p427n for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: E0729 17:53:02.045445   12629 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/86f59704-c1f4-4268-a873-d05ea3fb08a4-kube-api-access-p427n podName:86f59704-c1f4-4268-a873-d05ea3fb08a4 nodeName:}" failed. No retries permitted until 2024-07-29 17:53:03.045435316 +0000 UTC m=+15.354143299 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p427n" (UniqueName: "kubernetes.io/projected/86f59704-c1f4-4268-a873-d05ea3fb08a4-kube-api-access-p427n") pod "storage-provisioner" (UID: "86f59704-c1f4-4268-a873-d05ea3fb08a4") : configmap "kube-root-ca.crt" not found
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: I0729 17:53:02.198634   12629 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: I0729 17:53:02.212605   12629 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: I0729 17:53:02.246584   12629 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lbgj\" (UniqueName: \"kubernetes.io/projected/73fb9047-6590-4fc5-87d5-e7d76f08cce7-kube-api-access-5lbgj\") pod \"coredns-6d4b75cb6d-w5q6k\" (UID: \"73fb9047-6590-4fc5-87d5-e7d76f08cce7\") " pod="kube-system/coredns-6d4b75cb6d-w5q6k"
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: I0729 17:53:02.246612   12629 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73fb9047-6590-4fc5-87d5-e7d76f08cce7-config-volume\") pod \"coredns-6d4b75cb6d-w5q6k\" (UID: \"73fb9047-6590-4fc5-87d5-e7d76f08cce7\") " pod="kube-system/coredns-6d4b75cb6d-w5q6k"
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: I0729 17:53:02.246631   12629 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18d4322d-d06e-4751-b2f2-40a28b04b105-config-volume\") pod \"coredns-6d4b75cb6d-2zmff\" (UID: \"18d4322d-d06e-4751-b2f2-40a28b04b105\") " pod="kube-system/coredns-6d4b75cb6d-2zmff"
	Jul 29 17:53:02 running-upgrade-504000 kubelet[12629]: I0729 17:53:02.246641   12629 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm7m9\" (UniqueName: \"kubernetes.io/projected/18d4322d-d06e-4751-b2f2-40a28b04b105-kube-api-access-cm7m9\") pod \"coredns-6d4b75cb6d-2zmff\" (UID: \"18d4322d-d06e-4751-b2f2-40a28b04b105\") " pod="kube-system/coredns-6d4b75cb6d-2zmff"
	Jul 29 17:56:41 running-upgrade-504000 kubelet[12629]: I0729 17:56:41.239818   12629 scope.go:110] "RemoveContainer" containerID="74a37cb60d42433d216f9842b8f73e5069ad2a3f740cb4b39b2b61fbb17015d4"
	Jul 29 17:56:41 running-upgrade-504000 kubelet[12629]: I0729 17:56:41.258283   12629 scope.go:110] "RemoveContainer" containerID="f179b7a6916fb433862a24c3078b893fe02ed8120bfc29405d31617feaa6977f"
	
	
	==> storage-provisioner [585ed2b764f6] <==
	I0729 17:53:03.495926       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 17:53:03.500607       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 17:53:03.500896       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 17:53:03.506192       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 17:53:03.506269       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"288c3c00-848a-4656-8933-c35110dd6119", APIVersion:"v1", ResourceVersion:"384", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-504000_632c73b4-dc4a-42e3-bc76-4f8aca085f48 became leader
	I0729 17:53:03.506318       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-504000_632c73b4-dc4a-42e3-bc76-4f8aca085f48!
	I0729 17:53:03.607001       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-504000_632c73b4-dc4a-42e3-bc76-4f8aca085f48!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-504000 -n running-upgrade-504000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-504000 -n running-upgrade-504000: exit status 2 (15.611300583s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-504000" apiserver is not running, skipping kubectl commands (state="Stopped")
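
The status check above shows the harness's usual pattern: run the minikube binary, capture stdout, and treat a non-zero exit as data rather than an immediate failure (exit status 2 here just encodes "Stopped"). A hedged Go sketch of that check, not the actual helpers_test.go helper:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.APIServer}}", "-p", "running-upgrade-504000")
		out, err := cmd.Output()
		state := strings.TrimSpace(string(out)) // "Stopped" in the run above
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// minikube status exits non-zero when a component is not Running;
			// exit status 2 is expected for a stopped apiserver (may be ok).
			fmt.Printf("exit %d, apiserver state %q\n", ee.ExitCode(), state)
			return
		}
		fmt.Printf("apiserver state %q\n", state)
	}
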
helpers_test.go:175: Cleaning up "running-upgrade-504000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-504000
--- FAIL: TestRunningBinaryUpgrade (586.59s)

TestKubernetesUpgrade (18.17s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.902545792s)

-- stdout --
	* [kubernetes-upgrade-786000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-786000" primary control-plane node in "kubernetes-upgrade-786000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:50:34.726681    8291 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:50:34.726836    8291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:50:34.726841    8291 out.go:304] Setting ErrFile to fd 2...
	I0729 10:50:34.726844    8291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:50:34.726975    8291 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:50:34.728120    8291 out.go:298] Setting JSON to false
	I0729 10:50:34.744739    8291 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4803,"bootTime":1722270631,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:50:34.744805    8291 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:50:34.750738    8291 out.go:177] * [kubernetes-upgrade-786000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:50:34.757796    8291 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:50:34.757858    8291 notify.go:220] Checking for updates...
	I0729 10:50:34.764762    8291 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:50:34.766249    8291 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:50:34.769767    8291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:50:34.772902    8291 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:50:34.775786    8291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:50:34.779170    8291 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:50:34.779239    8291 config.go:182] Loaded profile config "running-upgrade-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:50:34.779295    8291 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:50:34.783744    8291 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:50:34.790709    8291 start.go:297] selected driver: qemu2
	I0729 10:50:34.790719    8291 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:50:34.790728    8291 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:50:34.793093    8291 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:50:34.795735    8291 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:50:34.798865    8291 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:50:34.798911    8291 cni.go:84] Creating CNI manager for ""
	I0729 10:50:34.798925    8291 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 10:50:34.798967    8291 start.go:340] cluster config:
	{Name:kubernetes-upgrade-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-786000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:50:34.802523    8291 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:50:34.809750    8291 out.go:177] * Starting "kubernetes-upgrade-786000" primary control-plane node in "kubernetes-upgrade-786000" cluster
	I0729 10:50:34.812724    8291 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:50:34.812742    8291 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 10:50:34.812756    8291 cache.go:56] Caching tarball of preloaded images
	I0729 10:50:34.812824    8291 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:50:34.812830    8291 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 10:50:34.812889    8291 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/kubernetes-upgrade-786000/config.json ...
	I0729 10:50:34.812901    8291 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/kubernetes-upgrade-786000/config.json: {Name:mkd82baf803a1488e52731e7d36a7b6a5a66e241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:50:34.813259    8291 start.go:360] acquireMachinesLock for kubernetes-upgrade-786000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:50:34.813293    8291 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "kubernetes-upgrade-786000"
	I0729 10:50:34.813303    8291 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-786000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:50:34.813343    8291 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:50:34.821609    8291 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:50:34.839470    8291 start.go:159] libmachine.API.Create for "kubernetes-upgrade-786000" (driver="qemu2")
	I0729 10:50:34.839495    8291 client.go:168] LocalClient.Create starting
	I0729 10:50:34.839563    8291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:50:34.839595    8291 main.go:141] libmachine: Decoding PEM data...
	I0729 10:50:34.839607    8291 main.go:141] libmachine: Parsing certificate...
	I0729 10:50:34.839652    8291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:50:34.839675    8291 main.go:141] libmachine: Decoding PEM data...
	I0729 10:50:34.839684    8291 main.go:141] libmachine: Parsing certificate...
	I0729 10:50:34.840036    8291 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:50:35.054744    8291 main.go:141] libmachine: Creating SSH key...
	I0729 10:50:35.195358    8291 main.go:141] libmachine: Creating Disk image...
	I0729 10:50:35.195366    8291 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:50:35.195582    8291 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0729 10:50:35.205021    8291 main.go:141] libmachine: STDOUT: 
	I0729 10:50:35.205037    8291 main.go:141] libmachine: STDERR: 
	I0729 10:50:35.205083    8291 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2 +20000M
	I0729 10:50:35.212969    8291 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:50:35.212990    8291 main.go:141] libmachine: STDERR: 
	I0729 10:50:35.213005    8291 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0729 10:50:35.213010    8291 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:50:35.213020    8291 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:50:35.213046    8291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:7a:15:d8:48:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0729 10:50:35.214714    8291 main.go:141] libmachine: STDOUT: 
	I0729 10:50:35.214728    8291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:50:35.214748    8291 client.go:171] duration metric: took 375.254916ms to LocalClient.Create
	I0729 10:50:37.216828    8291 start.go:128] duration metric: took 2.403509375s to createHost
	I0729 10:50:37.216865    8291 start.go:83] releasing machines lock for "kubernetes-upgrade-786000", held for 2.403605083s
	W0729 10:50:37.216904    8291 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:50:37.225573    8291 out.go:177] * Deleting "kubernetes-upgrade-786000" in qemu2 ...
	W0729 10:50:37.237077    8291 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:50:37.237094    8291 start.go:729] Will try again in 5 seconds ...
	I0729 10:50:42.238217    8291 start.go:360] acquireMachinesLock for kubernetes-upgrade-786000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:50:42.238477    8291 start.go:364] duration metric: took 201.125µs to acquireMachinesLock for "kubernetes-upgrade-786000"
	I0729 10:50:42.239011    8291 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-786000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:50:42.239174    8291 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:50:42.244580    8291 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 10:50:42.273554    8291 start.go:159] libmachine.API.Create for "kubernetes-upgrade-786000" (driver="qemu2")
	I0729 10:50:42.273593    8291 client.go:168] LocalClient.Create starting
	I0729 10:50:42.273679    8291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:50:42.273730    8291 main.go:141] libmachine: Decoding PEM data...
	I0729 10:50:42.273744    8291 main.go:141] libmachine: Parsing certificate...
	I0729 10:50:42.273791    8291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:50:42.273821    8291 main.go:141] libmachine: Decoding PEM data...
	I0729 10:50:42.273829    8291 main.go:141] libmachine: Parsing certificate...
	I0729 10:50:42.274319    8291 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:50:42.430263    8291 main.go:141] libmachine: Creating SSH key...
	I0729 10:50:42.534696    8291 main.go:141] libmachine: Creating Disk image...
	I0729 10:50:42.534707    8291 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:50:42.534942    8291 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0729 10:50:42.544259    8291 main.go:141] libmachine: STDOUT: 
	I0729 10:50:42.544279    8291 main.go:141] libmachine: STDERR: 
	I0729 10:50:42.544334    8291 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2 +20000M
	I0729 10:50:42.552300    8291 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:50:42.552314    8291 main.go:141] libmachine: STDERR: 
	I0729 10:50:42.552323    8291 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0729 10:50:42.552341    8291 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:50:42.552349    8291 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:50:42.552373    8291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:c5:30:ff:82:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0729 10:50:42.554010    8291 main.go:141] libmachine: STDOUT: 
	I0729 10:50:42.554022    8291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:50:42.554035    8291 client.go:171] duration metric: took 280.443833ms to LocalClient.Create
	I0729 10:50:44.556204    8291 start.go:128] duration metric: took 2.317022375s to createHost
	I0729 10:50:44.556259    8291 start.go:83] releasing machines lock for "kubernetes-upgrade-786000", held for 2.317797667s
	W0729 10:50:44.556640    8291 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:50:44.566207    8291 out.go:177] 
	W0729 10:50:44.573372    8291 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:50:44.573401    8291 out.go:239] * 
	* 
	W0729 10:50:44.575862    8291 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:50:44.586198    8291 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
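
Both create attempts above die at the same point for the same reason: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 never launches. A minimal probe for that precondition, as a sketch: the socket path is copied from the SocketVMnetPath field in the config above, while the two-second timeout is an arbitrary choice.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A dead or never-started socket_vmnet daemon yields exactly the
		// "Connection refused" seen in the libmachine STDERR above.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
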
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-786000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-786000: (2.845479459s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-786000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-786000 status --format={{.Host}}: exit status 7 (58.836875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.197266s)

-- stdout --
	* [kubernetes-upgrade-786000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-786000" primary control-plane node in "kubernetes-upgrade-786000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-786000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-786000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:50:47.536618    8325 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:50:47.536752    8325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:50:47.536756    8325 out.go:304] Setting ErrFile to fd 2...
	I0729 10:50:47.536758    8325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:50:47.536881    8325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:50:47.537807    8325 out.go:298] Setting JSON to false
	I0729 10:50:47.554251    8325 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4816,"bootTime":1722270631,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:50:47.554320    8325 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:50:47.558771    8325 out.go:177] * [kubernetes-upgrade-786000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:50:47.566951    8325 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:50:47.567010    8325 notify.go:220] Checking for updates...
	I0729 10:50:47.574848    8325 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:50:47.577946    8325 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:50:47.581875    8325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:50:47.584886    8325 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:50:47.587883    8325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:50:47.591089    8325 config.go:182] Loaded profile config "kubernetes-upgrade-786000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 10:50:47.591362    8325 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:50:47.594843    8325 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:50:47.601854    8325 start.go:297] selected driver: qemu2
	I0729 10:50:47.601862    8325 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-786000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:50:47.601923    8325 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:50:47.604168    8325 cni.go:84] Creating CNI manager for ""
	I0729 10:50:47.604186    8325 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:50:47.604216    8325 start.go:340] cluster config:
	{Name:kubernetes-upgrade-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-786000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:50:47.607680    8325 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:50:47.614879    8325 out.go:177] * Starting "kubernetes-upgrade-786000" primary control-plane node in "kubernetes-upgrade-786000" cluster
	I0729 10:50:47.618879    8325 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:50:47.618902    8325 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 10:50:47.618913    8325 cache.go:56] Caching tarball of preloaded images
	I0729 10:50:47.618977    8325 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:50:47.618983    8325 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 10:50:47.619042    8325 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/kubernetes-upgrade-786000/config.json ...
	I0729 10:50:47.619477    8325 start.go:360] acquireMachinesLock for kubernetes-upgrade-786000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:50:47.619509    8325 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "kubernetes-upgrade-786000"
	I0729 10:50:47.619518    8325 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:50:47.619524    8325 fix.go:54] fixHost starting: 
	I0729 10:50:47.619630    8325 fix.go:112] recreateIfNeeded on kubernetes-upgrade-786000: state=Stopped err=<nil>
	W0729 10:50:47.619638    8325 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:50:47.626876    8325 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-786000" ...
	I0729 10:50:47.630838    8325 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:50:47.630874    8325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:c5:30:ff:82:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0729 10:50:47.632856    8325 main.go:141] libmachine: STDOUT: 
	I0729 10:50:47.632877    8325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:50:47.632906    8325 fix.go:56] duration metric: took 13.38225ms for fixHost
	I0729 10:50:47.632910    8325 start.go:83] releasing machines lock for "kubernetes-upgrade-786000", held for 13.397625ms
	W0729 10:50:47.632922    8325 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:50:47.632954    8325 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:50:47.632959    8325 start.go:729] Will try again in 5 seconds ...
	I0729 10:50:52.635075    8325 start.go:360] acquireMachinesLock for kubernetes-upgrade-786000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:50:52.635584    8325 start.go:364] duration metric: took 404.541µs to acquireMachinesLock for "kubernetes-upgrade-786000"
	I0729 10:50:52.635743    8325 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:50:52.635763    8325 fix.go:54] fixHost starting: 
	I0729 10:50:52.636505    8325 fix.go:112] recreateIfNeeded on kubernetes-upgrade-786000: state=Stopped err=<nil>
	W0729 10:50:52.636534    8325 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:50:52.646270    8325 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-786000" ...
	I0729 10:50:52.650182    8325 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:50:52.650488    8325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:c5:30:ff:82:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubernetes-upgrade-786000/disk.qcow2
	I0729 10:50:52.660348    8325 main.go:141] libmachine: STDOUT: 
	I0729 10:50:52.660403    8325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:50:52.660495    8325 fix.go:56] duration metric: took 24.733625ms for fixHost
	I0729 10:50:52.660509    8325 start.go:83] releasing machines lock for "kubernetes-upgrade-786000", held for 24.902375ms
	W0729 10:50:52.660676    8325 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-786000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-786000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:50:52.669225    8325 out.go:177] 
	W0729 10:50:52.672395    8325 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:50:52.672417    8325 out.go:239] * 
	* 
	W0729 10:50:52.674837    8325 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:50:52.691913    8325 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-786000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-786000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-786000 version --output=json: exit status 1 (55.857875ms)

** stderr ** 
	error: context "kubernetes-upgrade-786000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
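
The final kubectl call fails because the profile was never written to the kubeconfig: with both VM starts dead, no context named kubernetes-upgrade-786000 exists. A small sketch that checks for a context before shelling out, assuming client-go's clientcmd package; the kubeconfig path is the KUBECONFIG value logged above, and the check itself is illustrative rather than part of the test suite.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the KUBECONFIG env var printed in the run above.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19339-6071/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	const want = "kubernetes-upgrade-786000"
	if _, ok := cfg.Contexts[want]; !ok {
		// Same condition kubectl reports as: context "..." does not exist
		fmt.Fprintf(os.Stderr, "context %q not found (%d contexts present)\n", want, len(cfg.Contexts))
		os.Exit(1)
	}
	fmt.Println("context exists; safe to run kubectl --context", want)
}
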
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-29 10:50:52.76242 -0700 PDT m=+954.793879251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-786000 -n kubernetes-upgrade-786000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-786000 -n kubernetes-upgrade-786000: exit status 7 (32.872417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-786000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-786000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-786000
--- FAIL: TestKubernetesUpgrade (18.17s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.37s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19339
- KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1165987741/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.37s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.27s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19339
- KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1700120945/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.27s)
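
Both hyperkit subtests fail identically: exit status 56 is minikube's DRV_UNSUPPORTED_OS code, and hyperkit exists only for darwin/amd64, so on this arm64 agent the outcome is predetermined. A sketch of the kind of platform guard a harness could use to skip rather than fail here; the helper name and test body are hypothetical, not the suite's actual code.

package main

import (
	"runtime"
	"testing"
)

// skipUnlessHyperkitSupported is a hypothetical guard: hyperkit is an
// Intel-mac-only hypervisor, so any other platform gets a skip, not a failure.
func skipUnlessHyperkitSupported(t *testing.T) {
	t.Helper()
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		t.Skipf("hyperkit unsupported on %s/%s", runtime.GOOS, runtime.GOARCH)
	}
}

func TestHyperkitUpgradePath(t *testing.T) {
	skipUnlessHyperkitSupported(t) // would skip on this darwin/arm64 agent
	// ... driver install-or-update assertions would go here ...
}
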

TestStoppedBinaryUpgrade/Upgrade (575.95s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.44887044 start -p stopped-upgrade-294000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.44887044 start -p stopped-upgrade-294000 --memory=2200 --vm-driver=qemu2 : (42.051182208s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.44887044 -p stopped-upgrade-294000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.44887044 -p stopped-upgrade-294000 stop: (12.101041542s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-294000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-294000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.681587125s)

-- stdout --
	* [stopped-upgrade-294000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-294000" primary control-plane node in "stopped-upgrade-294000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-294000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 10:51:47.936503    8358 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:51:47.936681    8358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:51:47.936685    8358 out.go:304] Setting ErrFile to fd 2...
	I0729 10:51:47.936689    8358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:51:47.936856    8358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:51:47.938224    8358 out.go:298] Setting JSON to false
	I0729 10:51:47.957715    8358 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4876,"bootTime":1722270631,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:51:47.957793    8358 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:51:47.962725    8358 out.go:177] * [stopped-upgrade-294000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:51:47.969644    8358 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:51:47.969704    8358 notify.go:220] Checking for updates...
	I0729 10:51:47.977511    8358 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:51:47.980622    8358 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:51:47.983652    8358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:51:47.986649    8358 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:51:47.989661    8358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:51:47.993104    8358 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:51:47.996581    8358 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 10:51:47.999625    8358 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:51:48.003662    8358 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:51:48.010584    8358 start.go:297] selected driver: qemu2
	I0729 10:51:48.010592    8358 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51474 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:51:48.010644    8358 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:51:48.013616    8358 cni.go:84] Creating CNI manager for ""
	I0729 10:51:48.013633    8358 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:51:48.013665    8358 start.go:340] cluster config:
	{Name:stopped-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51474 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:51:48.013722    8358 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:51:48.021607    8358 out.go:177] * Starting "stopped-upgrade-294000" primary control-plane node in "stopped-upgrade-294000" cluster
	I0729 10:51:48.025614    8358 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:51:48.025634    8358 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 10:51:48.025645    8358 cache.go:56] Caching tarball of preloaded images
	I0729 10:51:48.025708    8358 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:51:48.025715    8358 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 10:51:48.025778    8358 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/config.json ...
	I0729 10:51:48.026290    8358 start.go:360] acquireMachinesLock for stopped-upgrade-294000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:51:48.026327    8358 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "stopped-upgrade-294000"
	I0729 10:51:48.026338    8358 start.go:96] Skipping create...Using existing machine configuration
	I0729 10:51:48.026345    8358 fix.go:54] fixHost starting: 
	I0729 10:51:48.026478    8358 fix.go:112] recreateIfNeeded on stopped-upgrade-294000: state=Stopped err=<nil>
	W0729 10:51:48.026487    8358 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 10:51:48.030611    8358 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-294000" ...
	I0729 10:51:48.038425    8358 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:51:48.038502    8358 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51439-:22,hostfwd=tcp::51440-:2376,hostname=stopped-upgrade-294000 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/disk.qcow2
	I0729 10:51:48.089690    8358 main.go:141] libmachine: STDOUT: 
	I0729 10:51:48.089708    8358 main.go:141] libmachine: STDERR: 
	I0729 10:51:48.089713    8358 main.go:141] libmachine: Waiting for VM to start (ssh -p 51439 docker@127.0.0.1)...
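
For context, the restart above is libmachine shelling out to qemu-system-aarch64: hvf provides hardware acceleration on Apple Silicon, and the user-mode NIC forwards host ports 51439 and 51440 to the guest's SSH (22) and Docker (2376) ports, which is why the subsequent wait probes ssh -p 51439 on 127.0.0.1. A minimal Go sketch of assembling a comparable invocation (machineDir and the port numbers are placeholders taken from this log, not minikube's actual builder code):

    // qemu_restart_sketch.go: assemble and launch a qemu-system-aarch64
    // command comparable to the one logged above. All paths and ports are
    // placeholders copied from this log, not minikube's real builder.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        machineDir := "/path/to/.minikube/machines/stopped-upgrade-294000" // placeholder
        args := []string{
            "-M", "virt,highmem=off",
            "-cpu", "host",
            "-accel", "hvf", // Hypervisor.framework acceleration on Apple Silicon
            "-m", "2200", "-smp", "2",
            "-boot", "d",
            "-cdrom", machineDir + "/boot2docker.iso",
            "-qmp", "unix:" + machineDir + "/monitor,server,nowait",
            "-pidfile", machineDir + "/qemu.pid",
            // user-mode NIC: host 51439 -> guest 22 (SSH), host 51440 -> guest 2376 (Docker)
            "-nic", "user,model=virtio,hostfwd=tcp::51439-:22,hostfwd=tcp::51440-:2376",
            "-display", "none",
            "-daemonize", machineDir + "/disk.qcow2",
        }
        out, err := exec.Command("qemu-system-aarch64", args...).CombinedOutput()
        fmt.Printf("stdout/stderr: %s err: %v\n", out, err)
    }
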
	I0729 10:52:07.911175    8358 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/config.json ...
	I0729 10:52:07.911552    8358 machine.go:94] provisionDockerMachine start ...
	I0729 10:52:07.911629    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:07.911841    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:07.911849    8358 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 10:52:07.978332    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 10:52:07.978350    8358 buildroot.go:166] provisioning hostname "stopped-upgrade-294000"
	I0729 10:52:07.978396    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:07.978518    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:07.978529    8358 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-294000 && echo "stopped-upgrade-294000" | sudo tee /etc/hostname
	I0729 10:52:08.040506    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-294000
	
	I0729 10:52:08.040569    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:08.040701    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:08.040710    8358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-294000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-294000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-294000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:52:08.100281    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
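
The SSH script above keeps /etc/hosts idempotent: if some line already ends in the hostname it does nothing, otherwise it rewrites an existing 127.0.1.1 entry in place or appends one. The same logic as a standalone Go sketch (operating on an arbitrary hosts-format file; ensureHostEntry is an illustrative name, not a minikube function):

    // hosts_entry_sketch.go: ensure "127.0.1.1 <name>" exists in a hosts
    // file, mirroring the grep/sed logic in the SSH command above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostEntry(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        // Pass 1: entry already present under any IP? Nothing to do.
        for _, l := range lines {
            f := strings.Fields(l)
            if len(f) >= 2 && f[len(f)-1] == name {
                return nil
            }
        }
        // Pass 2: rewrite an existing 127.0.1.1 line, else append one.
        replaced := false
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                replaced = true
                break
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+name)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        fmt.Println(ensureHostEntry("hosts.txt", "stopped-upgrade-294000"))
    }
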
	I0729 10:52:08.100294    8358 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19339-6071/.minikube CaCertPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19339-6071/.minikube}
	I0729 10:52:08.100315    8358 buildroot.go:174] setting up certificates
	I0729 10:52:08.100320    8358 provision.go:84] configureAuth start
	I0729 10:52:08.100328    8358 provision.go:143] copyHostCerts
	I0729 10:52:08.100398    8358 exec_runner.go:144] found /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.pem, removing ...
	I0729 10:52:08.100405    8358 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.pem
	I0729 10:52:08.100784    8358 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.pem (1078 bytes)
	I0729 10:52:08.100964    8358 exec_runner.go:144] found /Users/jenkins/minikube-integration/19339-6071/.minikube/cert.pem, removing ...
	I0729 10:52:08.100971    8358 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19339-6071/.minikube/cert.pem
	I0729 10:52:08.101016    8358 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19339-6071/.minikube/cert.pem (1123 bytes)
	I0729 10:52:08.101109    8358 exec_runner.go:144] found /Users/jenkins/minikube-integration/19339-6071/.minikube/key.pem, removing ...
	I0729 10:52:08.101112    8358 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19339-6071/.minikube/key.pem
	I0729 10:52:08.101154    8358 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19339-6071/.minikube/key.pem (1675 bytes)
	I0729 10:52:08.101237    8358 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-294000 san=[127.0.0.1 localhost minikube stopped-upgrade-294000]
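
Here provision.go mints a server certificate signed by the local minikube CA, with the logged names and addresses as subject alternative names. A condensed sketch of the same operation using Go's crypto/x509 (the throwaway CA in main stands in for ca.pem/ca-key.pem; key sizes and lifetimes are illustrative, though the 26280h lifetime matches the CertExpiration value in the config dump above):

    // servercert_sketch.go: issue a CA-signed server cert with SANs, as in
    // the provision.go line above. Illustrative, not minikube's code.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-294000"}},
            // SANs from the log line above:
            DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-294000"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    }

    func main() {
        // Throwaway self-signed CA standing in for ca.pem / ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            BasicConstraintsValid: true,
            KeyUsage:              x509.KeyUsageCertSign,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)
        der, err := issueServerCert(caCert, caKey)
        fmt.Println(len(der), err)
    }
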
	I0729 10:52:08.238705    8358 provision.go:177] copyRemoteCerts
	I0729 10:52:08.238765    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:52:08.238774    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	I0729 10:52:08.270023    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 10:52:08.278014    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 10:52:08.285914    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 10:52:08.293952    8358 provision.go:87] duration metric: took 193.628125ms to configureAuth
	I0729 10:52:08.293964    8358 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:52:08.294111    8358 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:52:08.294148    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:08.294243    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:08.294249    8358 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 10:52:08.354928    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 10:52:08.354938    8358 buildroot.go:70] root file system type: tmpfs
	I0729 10:52:08.354988    8358 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 10:52:08.355034    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:08.355139    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:08.355176    8358 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 10:52:08.418601    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 10:52:08.418660    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:08.418785    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:08.418793    8358 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 10:52:08.794118    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 10:52:08.794133    8358 machine.go:97] duration metric: took 882.58875ms to provisionDockerMachine
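
Two details of the unit installation above are worth noting. The empty ExecStart= line followed by a full ExecStart=... is systemd's idiom for clearing an inherited start command before setting a new one, as the unit's own comments explain; and the diff -u old new || { mv ...; systemctl ... } one-liner only swaps the file in and restarts Docker when the rendered unit actually differs (or, as here, does not exist yet). A Go sketch of that compare-then-swap-then-restart pattern (updateUnit is an illustrative helper, not minikube's):

    // unit_update_sketch.go: install a systemd unit only when its content
    // changed, mirroring the diff || { mv; daemon-reload; restart; } above.
    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func updateUnit(path string, content []byte) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, content) {
            return nil // unchanged: skip the daemon-reload and restart
        }
        if err := os.WriteFile(path+".new", content, 0644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v (%s)", args, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(updateUnit("docker.service", []byte("[Unit]\n...")))
    }
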
	I0729 10:52:08.794139    8358 start.go:293] postStartSetup for "stopped-upgrade-294000" (driver="qemu2")
	I0729 10:52:08.794145    8358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:52:08.794214    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:52:08.794225    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	I0729 10:52:08.828772    8358 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:52:08.829936    8358 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 10:52:08.829943    8358 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19339-6071/.minikube/addons for local assets ...
	I0729 10:52:08.830020    8358 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19339-6071/.minikube/files for local assets ...
	I0729 10:52:08.830109    8358 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem -> 65432.pem in /etc/ssl/certs
	I0729 10:52:08.830200    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 10:52:08.833263    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem --> /etc/ssl/certs/65432.pem (1708 bytes)
	I0729 10:52:08.840191    8358 start.go:296] duration metric: took 46.048291ms for postStartSetup
	I0729 10:52:08.840206    8358 fix.go:56] duration metric: took 20.814213542s for fixHost
	I0729 10:52:08.840245    8358 main.go:141] libmachine: Using SSH client type: native
	I0729 10:52:08.840349    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d2ea10] 0x100d31270 <nil>  [] 0s} localhost 51439 <nil> <nil>}
	I0729 10:52:08.840353    8358 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 10:52:08.899046    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722275529.005586712
	
	I0729 10:52:08.899058    8358 fix.go:216] guest clock: 1722275529.005586712
	I0729 10:52:08.899062    8358 fix.go:229] Guest: 2024-07-29 10:52:09.005586712 -0700 PDT Remote: 2024-07-29 10:52:08.840208 -0700 PDT m=+20.935016751 (delta=165.378712ms)
	I0729 10:52:08.899074    8358 fix.go:200] guest clock delta is within tolerance: 165.378712ms
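
The clock check above runs date +%s.%N in the guest, parses the seconds.nanoseconds output, and accepts the roughly 165ms skew because it is within tolerance; a larger delta would force a clock resync. A sketch of that parse-and-compare step (the two-second tolerance in main is an assumption, not the value minikube uses):

    // clockdelta_sketch.go: parse `date +%s.%N` output from the guest and
    // compare against the host clock, as fix.go does above. Illustrative.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    func guestDelta(dateOutput string, tolerance time.Duration) (time.Duration, bool, error) {
        parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, false, err
        }
        var nsec int64
        if len(parts) == 2 {
            // %N is always nine digits, so leading zeros parse correctly.
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return 0, false, err
            }
        }
        guest := time.Unix(sec, nsec)
        delta := time.Since(guest)
        return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
    }

    func main() {
        d, ok, err := guestDelta("1722275529.005586712", 2*time.Second)
        fmt.Println(d, ok, err)
    }
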
	I0729 10:52:08.899077    8358 start.go:83] releasing machines lock for "stopped-upgrade-294000", held for 20.87309625s
	I0729 10:52:08.899151    8358 ssh_runner.go:195] Run: cat /version.json
	I0729 10:52:08.899160    8358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:52:08.899159    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	I0729 10:52:08.899179    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	W0729 10:52:08.899695    8358 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51439: connect: connection refused
	I0729 10:52:08.899716    8358 retry.go:31] will retry after 195.722751ms: dial tcp [::1]:51439: connect: connection refused
	W0729 10:52:09.134267    8358 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 10:52:09.134350    8358 ssh_runner.go:195] Run: systemctl --version
	I0729 10:52:09.136915    8358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:52:09.138895    8358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:52:09.138926    8358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 10:52:09.142521    8358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 10:52:09.148146    8358 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:52:09.148161    8358 start.go:495] detecting cgroup driver to use...
	I0729 10:52:09.148244    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:52:09.155415    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 10:52:09.158639    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 10:52:09.161767    8358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 10:52:09.161792    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 10:52:09.165119    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:52:09.168374    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 10:52:09.171181    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:52:09.174058    8358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:52:09.177434    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 10:52:09.180750    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 10:52:09.183670    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 10:52:09.186379    8358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:52:09.189570    8358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:52:09.192788    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:09.274385    8358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 10:52:09.280577    8358 start.go:495] detecting cgroup driver to use...
	I0729 10:52:09.280649    8358 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 10:52:09.286733    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:52:09.291603    8358 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:52:09.304770    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:52:09.309381    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 10:52:09.314099    8358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 10:52:09.373258    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 10:52:09.378715    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:52:09.384403    8358 ssh_runner.go:195] Run: which cri-dockerd
	I0729 10:52:09.385638    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 10:52:09.388439    8358 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 10:52:09.392729    8358 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 10:52:09.468830    8358 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 10:52:09.543159    8358 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 10:52:09.543225    8358 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 10:52:09.548270    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:09.624824    8358 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:52:10.776069    8358 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.151246583s)
	I0729 10:52:10.776122    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 10:52:10.783806    8358 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 10:52:10.789670    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:52:10.794469    8358 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 10:52:10.870749    8358 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 10:52:10.946683    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:11.026423    8358 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 10:52:11.032486    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 10:52:11.036713    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:11.111306    8358 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 10:52:11.153242    8358 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 10:52:11.153327    8358 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 10:52:11.155453    8358 start.go:563] Will wait 60s for crictl version
	I0729 10:52:11.155482    8358 ssh_runner.go:195] Run: which crictl
	I0729 10:52:11.156811    8358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:52:11.170612    8358 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 10:52:11.170677    8358 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 10:52:11.186976    8358 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
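
Both 60-second waits above (first for the /var/run/cri-dockerd.sock socket path, then for crictl version to answer) boil down to re-running a probe until it succeeds or the deadline passes. A minimal polling sketch of the socket wait (the 500ms retry interval is an assumption):

    // waitsock_sketch.go: poll for a path with a deadline, like the 60s
    // waits for /var/run/cri-dockerd.sock and crictl above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // the socket showed up
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
    }
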
	I0729 10:52:11.207281    8358 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 10:52:11.207421    8358 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 10:52:11.208773    8358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:52:11.212763    8358 kubeadm.go:883] updating cluster {Name:stopped-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51474 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 10:52:11.212817    8358 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 10:52:11.212859    8358 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:52:11.223352    8358 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:52:11.223360    8358 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
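
Judging from the log, the "wasn't preloaded" verdict comes from an exact image-name comparison: the v1.24-era preload tarball ships images under the old k8s.gcr.io registry, while this minikube expects registry.k8s.io names, so every expected image misses the check and the preload is re-applied and then loaded image by image below. A sketch of that check:

    // preloadcheck_sketch.go: the exact-string membership test that makes
    // the k8s.gcr.io images above count as "not preloaded" when the
    // expected name lives under registry.k8s.io.
    package main

    import "fmt"

    func preloaded(got []string, want string) bool {
        for _, img := range got {
            if img == want { // exact match; no registry aliasing
                return true
            }
        }
        return false
    }

    func main() {
        got := []string{"k8s.gcr.io/kube-apiserver:v1.24.1", "k8s.gcr.io/pause:3.7"}
        fmt.Println(preloaded(got, "registry.k8s.io/kube-apiserver:v1.24.1")) // false
    }
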
	I0729 10:52:11.223407    8358 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:52:11.226517    8358 ssh_runner.go:195] Run: which lz4
	I0729 10:52:11.227878    8358 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 10:52:11.229163    8358 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 10:52:11.229175    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 10:52:12.154326    8358 docker.go:649] duration metric: took 926.501042ms to copy over tarball
	I0729 10:52:12.154384    8358 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 10:52:13.329464    8358 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.175084291s)
	I0729 10:52:13.329476    8358 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 10:52:13.345524    8358 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 10:52:13.348877    8358 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 10:52:13.353978    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:13.431846    8358 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 10:52:14.946015    8358 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.514178667s)
	I0729 10:52:14.946114    8358 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 10:52:14.960344    8358 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 10:52:14.960354    8358 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 10:52:14.960359    8358 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 10:52:14.965870    8358 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:14.967816    8358 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:52:14.970062    8358 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:52:14.970355    8358 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:14.971596    8358 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:52:14.971767    8358 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:52:14.971853    8358 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:52:14.973231    8358 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 10:52:14.973689    8358 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:52:14.975577    8358 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:52:14.975662    8358 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:52:14.975810    8358 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:52:14.977340    8358 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 10:52:14.977472    8358 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:52:14.978965    8358 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:52:14.979886    8358 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:52:15.386670    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:52:15.386810    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:52:15.396205    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:52:15.396682    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:52:15.400896    8358 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 10:52:15.400922    8358 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:52:15.400959    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 10:52:15.403257    8358 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 10:52:15.403280    8358 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:52:15.403314    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 10:52:15.419804    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 10:52:15.426301    8358 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 10:52:15.426322    8358 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:52:15.426304    8358 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 10:52:15.426366    8358 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:52:15.426344    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 10:52:15.426388    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 10:52:15.426391    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 10:52:15.430783    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0729 10:52:15.431251    8358 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 10:52:15.431353    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:52:15.435522    8358 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 10:52:15.435542    8358 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 10:52:15.435587    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 10:52:15.449959    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 10:52:15.455634    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 10:52:15.455664    8358 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 10:52:15.455683    8358 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:52:15.455721    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 10:52:15.455732    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 10:52:15.455817    8358 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 10:52:15.466194    8358 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 10:52:15.466218    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 10:52:15.466290    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 10:52:15.466383    8358 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:52:15.467943    8358 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 10:52:15.467953    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 10:52:15.470527    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 10:52:15.483250    8358 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 10:52:15.483266    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 10:52:15.522738    8358 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 10:52:15.522760    8358 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 10:52:15.522816    8358 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 10:52:15.538803    8358 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 10:52:15.538821    8358 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 10:52:15.538827    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 10:52:15.539413    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 10:52:15.539520    8358 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:52:15.577698    8358 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 10:52:15.577738    8358 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 10:52:15.577759    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0729 10:52:15.623863    8358 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 10:52:15.623977    8358 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:15.653134    8358 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 10:52:15.653160    8358 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:15.653238    8358 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:52:15.686803    8358 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 10:52:15.686927    8358 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:52:15.700690    8358 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 10:52:15.700719    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 10:52:15.764235    8358 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 10:52:15.764253    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 10:52:16.142390    8358 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 10:52:16.142418    8358 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 10:52:16.142460    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0729 10:52:16.296176    8358 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 10:52:16.296216    8358 cache_images.go:92] duration metric: took 1.335872667s to LoadCachedImages
	W0729 10:52:16.296264    8358 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
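
Each image that "needs transfer" above goes through the same per-image pipeline: docker rmi the stale tag, scp the cached tarball into /var/lib/minikube/images, then sudo cat <tar> | docker load. Only pause, coredns, etcd, and storage-provisioner make it through; the apiserver, proxy, controller-manager, and scheduler images fail the overall LoadCachedImages step because their cache files are missing on the host, as the warning above shows. A sketch of the load step itself:

    // imageload_sketch.go: the per-image load step seen above, equivalent
    // to `sudo cat /var/lib/minikube/images/<img> | docker load`.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func loadImage(tarPath string) error {
        f, err := os.Open(tarPath)
        if err != nil {
            return err // e.g. kube-apiserver_v1.24.1: no such file or directory
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f // stream the tarball on stdin, like `cat <tar> | docker load`
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker load: %v (%s)", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(loadImage("/var/lib/minikube/images/pause_3.7"))
    }
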
	I0729 10:52:16.296272    8358 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 10:52:16.296326    8358 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-294000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:52:16.296389    8358 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 10:52:16.309893    8358 cni.go:84] Creating CNI manager for ""
	I0729 10:52:16.309907    8358 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:52:16.309915    8358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:52:16.309924    8358 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-294000 NodeName:stopped-upgrade-294000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:52:16.309990    8358 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-294000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 10:52:16.310048    8358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 10:52:16.313481    8358 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:52:16.313509    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 10:52:16.316529    8358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 10:52:16.321746    8358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:52:16.326984    8358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
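
The kubeadm.yaml rendered above is a single file holding four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that splits such a file and lists the document kinds (this uses the third-party gopkg.in/yaml.v3 module, an assumption about tooling rather than anything the log shows):

    // kubeadmdocs_sketch.go: enumerate the kinds in a multi-document
    // kubeadm config like the one written to kubeadm.yaml.new above.
    // Requires: go get gopkg.in/yaml.v3
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()
        dec := yaml.NewDecoder(f) // decodes each `---`-separated document in turn
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                fmt.Println(err)
                return
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }
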
	I0729 10:52:16.332029    8358 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 10:52:16.333255    8358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:52:16.337222    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:52:16.419900    8358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:52:16.425057    8358 certs.go:68] Setting up /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000 for IP: 10.0.2.15
	I0729 10:52:16.425069    8358 certs.go:194] generating shared ca certs ...
	I0729 10:52:16.425077    8358 certs.go:226] acquiring lock for ca certs: {Name:mkd86fdb55ccc20c129297fd51f66c0e2f8e203c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:16.425255    8358 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.key
	I0729 10:52:16.425305    8358 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/proxy-client-ca.key
	I0729 10:52:16.425312    8358 certs.go:256] generating profile certs ...
	I0729 10:52:16.425391    8358 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.key
	I0729 10:52:16.425416    8358 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key.31b6761a
	I0729 10:52:16.425428    8358 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt.31b6761a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 10:52:16.528637    8358 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt.31b6761a ...
	I0729 10:52:16.528650    8358 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt.31b6761a: {Name:mkf96fc44bc0a8ea540ede29386cc4783d1d43aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:16.528972    8358 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key.31b6761a ...
	I0729 10:52:16.528977    8358 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key.31b6761a: {Name:mka64950b9c5d8430ac7b24db40a506627f9be36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:16.529116    8358 certs.go:381] copying /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt.31b6761a -> /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt
	I0729 10:52:16.529241    8358 certs.go:385] copying /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key.31b6761a -> /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key
	I0729 10:52:16.529386    8358 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/proxy-client.key
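
Note the apiserver certificate is first written as apiserver.crt.31b6761a and then copied to apiserver.crt. The suffix appears to key the certificate to its SAN/IP set, so an unchanged address list lets regeneration be skipped, as it was for the "minikube-user" and "aggregator" certs above. One plausible way to derive such a key is sketched below; this is speculation about the scheme, not minikube's confirmed hash function:

    // certkey_sketch.go: derive a short stable suffix from a cert's SAN
    // set, one plausible scheme behind names like apiserver.crt.31b6761a
    // above. Illustrative; minikube's actual hash is not shown in the log.
    package main

    import (
        "crypto/sha256"
        "fmt"
        "strings"
    )

    func sanSuffix(sans []string) string {
        sum := sha256.Sum256([]byte(strings.Join(sans, ",")))
        return fmt.Sprintf("%x", sum[:4]) // eight hex chars, like 31b6761a
    }

    func main() {
        fmt.Println(sanSuffix([]string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "10.0.2.15"}))
    }
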
	I0729 10:52:16.529519    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/6543.pem (1338 bytes)
	W0729 10:52:16.529548    8358 certs.go:480] ignoring /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/6543_empty.pem, impossibly tiny 0 bytes
	I0729 10:52:16.529553    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 10:52:16.529583    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem (1078 bytes)
	I0729 10:52:16.529614    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:52:16.529642    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/key.pem (1675 bytes)
	I0729 10:52:16.529700    8358 certs.go:484] found cert: /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem (1708 bytes)
	I0729 10:52:16.530046    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:52:16.537514    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:52:16.545285    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:52:16.552926    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:52:16.559632    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 10:52:16.566234    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 10:52:16.573505    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:52:16.581082    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 10:52:16.588463    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/ssl/certs/65432.pem --> /usr/share/ca-certificates/65432.pem (1708 bytes)
	I0729 10:52:16.595190    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:52:16.601796    8358 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/6543.pem --> /usr/share/ca-certificates/6543.pem (1338 bytes)
	I0729 10:52:16.609068    8358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:52:16.614209    8358 ssh_runner.go:195] Run: openssl version
	I0729 10:52:16.616071    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:52:16.618853    8358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:52:16.620306    8358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:52:16.620325    8358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:52:16.622006    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 10:52:16.625230    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6543.pem && ln -fs /usr/share/ca-certificates/6543.pem /etc/ssl/certs/6543.pem"
	I0729 10:52:16.628403    8358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6543.pem
	I0729 10:52:16.629692    8358 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:36 /usr/share/ca-certificates/6543.pem
	I0729 10:52:16.629713    8358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6543.pem
	I0729 10:52:16.631603    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6543.pem /etc/ssl/certs/51391683.0"
	I0729 10:52:16.634343    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65432.pem && ln -fs /usr/share/ca-certificates/65432.pem /etc/ssl/certs/65432.pem"
	I0729 10:52:16.637581    8358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65432.pem
	I0729 10:52:16.639076    8358 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:36 /usr/share/ca-certificates/65432.pem
	I0729 10:52:16.639102    8358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65432.pem
	I0729 10:52:16.640896    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65432.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 10:52:16.643767    8358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:52:16.645138    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 10:52:16.647369    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 10:52:16.649183    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 10:52:16.651230    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 10:52:16.652914    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 10:52:16.654645    8358 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 10:52:16.656676    8358 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51474 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 10:52:16.656742    8358 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:52:16.666784    8358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:52:16.670083    8358 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 10:52:16.670088    8358 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 10:52:16.670109    8358 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 10:52:16.672984    8358 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:52:16.673291    8358 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-294000" does not appear in /Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:52:16.673391    8358 kubeconfig.go:62] /Users/jenkins/minikube-integration/19339-6071/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-294000" cluster setting kubeconfig missing "stopped-upgrade-294000" context setting]
	I0729 10:52:16.673586    8358 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/kubeconfig: {Name:mkf75fdff2d3e918223b7f2dbeb4359c01007a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:52:16.674046    8358 kapi.go:59] client config for stopped-upgrade-294000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.key", CAFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1020c4080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:52:16.674386    8358 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 10:52:16.677168    8358 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-294000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0729 10:52:16.677174    8358 kubeadm.go:1160] stopping kube-system containers ...
	I0729 10:52:16.677213    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 10:52:16.688114    8358 docker.go:483] Stopping containers: [a9a637b09ebc 81df750d149b bb4196aefa69 4494551802a6 2afc138a6e36 734c1aa632b5 07079e9404aa d6f86f1633f4]
	I0729 10:52:16.688175    8358 ssh_runner.go:195] Run: docker stop a9a637b09ebc 81df750d149b bb4196aefa69 4494551802a6 2afc138a6e36 734c1aa632b5 07079e9404aa d6f86f1633f4
	I0729 10:52:16.698803    8358 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 10:52:16.704569    8358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:52:16.707466    8358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:52:16.707472    8358 kubeadm.go:157] found existing configuration files:
	
	I0729 10:52:16.707492    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/admin.conf
	I0729 10:52:16.710022    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:52:16.710049    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:52:16.712896    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/kubelet.conf
	I0729 10:52:16.715412    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:52:16.715436    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:52:16.718077    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/controller-manager.conf
	I0729 10:52:16.722048    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:52:16.722070    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:52:16.725192    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/scheduler.conf
	I0729 10:52:16.728017    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:52:16.728039    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 10:52:16.730706    8358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:52:16.733976    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:52:16.756526    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:52:17.295983    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:52:17.430233    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:52:17.449202    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 10:52:17.471908    8358 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:52:17.471981    8358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:52:17.973346    8358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:52:18.474031    8358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:52:18.478415    8358 api_server.go:72] duration metric: took 1.00652525s to wait for apiserver process to appear ...
	I0729 10:52:18.478424    8358 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:52:18.478434    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:23.479286    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:23.479369    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:28.480357    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:28.480419    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:33.480628    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:33.480676    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:38.481029    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:38.481063    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:43.481522    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:43.481585    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:48.482183    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:48.482201    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:53.482954    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:53.483031    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:52:58.484816    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:52:58.484845    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:03.486261    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:03.486305    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:08.486708    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:08.486787    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:13.489315    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:13.489381    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:18.491764    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:18.491885    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:18.507711    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:18.507800    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:18.520240    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:18.520309    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:18.531563    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:18.531632    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:18.541665    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:18.541734    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:18.551962    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:18.552037    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:18.563288    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:18.563357    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:18.573310    8358 logs.go:276] 0 containers: []
	W0729 10:53:18.573323    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:18.573377    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:18.583967    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:53:18.583989    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:18.583995    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:18.592090    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:18.592100    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:18.615752    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:18.615764    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:18.651908    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:18.651917    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:18.663265    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:18.663276    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:18.680733    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:18.680742    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:18.694923    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:18.694934    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:53:18.706650    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:18.706661    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:18.718966    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:18.718977    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:18.733141    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:18.733153    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:53:18.748478    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:18.748489    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:18.760278    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:18.760288    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:18.778820    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:18.778830    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:18.790710    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:18.790720    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:18.891934    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:18.891946    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:18.932681    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:18.932691    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:18.946573    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:18.946584    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:21.465747    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:26.468422    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:26.468804    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:26.502304    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:26.502451    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:26.522758    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:26.522895    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:26.537581    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:26.537668    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:26.553451    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:26.553525    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:26.567164    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:26.567236    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:26.580803    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:26.580872    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:26.591346    8358 logs.go:276] 0 containers: []
	W0729 10:53:26.591357    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:26.591416    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:26.601728    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:53:26.601746    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:26.601752    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:26.620923    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:26.620933    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:26.632343    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:26.632355    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:53:26.647990    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:26.647999    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:26.661292    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:26.661303    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:26.685099    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:26.685107    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:26.689566    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:26.689573    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:26.725355    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:26.725368    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:26.738958    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:26.738970    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:26.750583    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:26.750593    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:26.766192    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:26.766203    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:26.777808    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:26.777822    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:26.790818    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:26.790832    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:26.805127    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:26.805140    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:26.843714    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:26.843724    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:26.855831    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:26.855845    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:26.893091    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:26.893101    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:53:29.408906    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:34.411237    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:34.411559    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:34.443739    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:34.443909    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:34.467363    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:34.467483    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:34.483504    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:34.483587    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:34.496606    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:34.496676    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:34.507574    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:34.507641    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:34.518213    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:34.518276    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:34.528747    8358 logs.go:276] 0 containers: []
	W0729 10:53:34.528759    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:34.528809    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:34.538856    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:53:34.538876    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:34.538881    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:34.562619    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:34.562631    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:34.575533    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:34.575545    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:34.590235    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:34.590245    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:34.626297    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:34.626306    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:34.662941    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:34.662954    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:34.677426    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:34.677436    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:34.688415    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:34.688428    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:34.701982    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:34.701993    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:34.720065    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:34.720080    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:34.724253    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:34.724260    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:34.739077    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:34.739088    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:34.777669    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:34.777680    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:53:34.791995    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:34.792006    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:34.807335    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:34.807346    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:34.819509    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:34.819521    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:34.833171    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:34.833182    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:53:37.346793    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:42.349072    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:42.349227    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:42.362055    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:42.362140    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:42.372644    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:42.372722    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:42.383090    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:42.383158    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:42.393892    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:42.393966    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:42.404305    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:42.404382    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:42.415013    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:42.415079    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:42.425499    8358 logs.go:276] 0 containers: []
	W0729 10:53:42.425514    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:42.425577    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:42.435781    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:53:42.435798    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:42.435804    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:42.453070    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:42.453083    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:42.488466    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:42.488480    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:42.502050    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:42.502061    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:53:42.516480    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:42.516492    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:42.541232    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:42.541240    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:42.579306    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:42.579316    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:42.593114    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:42.593127    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:42.604357    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:42.604365    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:42.617576    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:42.617586    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:53:42.628678    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:42.628691    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:42.644377    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:42.644388    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:42.663603    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:42.663620    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:42.675430    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:42.675442    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:42.687184    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:42.687198    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:42.691098    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:42.691105    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:42.736824    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:42.736837    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:45.251479    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:50.253720    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:50.253865    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:50.273432    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:50.273520    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:50.288569    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:50.288650    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:50.300785    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:50.300854    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:50.311531    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:50.311608    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:50.322105    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:50.322171    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:50.333021    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:50.333087    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:50.342964    8358 logs.go:276] 0 containers: []
	W0729 10:53:50.342974    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:50.343030    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:50.353024    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:53:50.353045    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:50.353050    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:50.390343    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:50.390351    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:50.394546    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:50.394555    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:50.405673    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:50.405684    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:50.421596    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:50.421605    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:50.434903    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:50.434914    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:50.448047    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:50.448057    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:50.459308    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:50.459321    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:50.482659    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:50.482671    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:50.493896    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:50.493909    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:50.527760    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:50.527773    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:50.568178    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:50.568191    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:50.584961    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:50.584974    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:50.596422    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:50.596437    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:50.615222    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:50.615234    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:53:50.626678    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:50.626689    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:50.641072    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:50.641082    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:53:53.157252    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:53:58.159489    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:53:58.159731    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:53:58.184445    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:53:58.184547    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:53:58.208027    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:53:58.208097    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:53:58.221194    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:53:58.221274    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:53:58.231744    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:53:58.231818    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:53:58.242430    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:53:58.242506    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:53:58.253392    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:53:58.253462    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:53:58.263770    8358 logs.go:276] 0 containers: []
	W0729 10:53:58.263782    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:53:58.263843    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:53:58.275377    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:53:58.275398    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:53:58.275404    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:53:58.289045    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:53:58.289056    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:53:58.327463    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:53:58.327475    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:53:58.341531    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:53:58.341542    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:53:58.352876    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:53:58.352887    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:53:58.368880    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:53:58.368891    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:53:58.380375    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:53:58.380386    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:53:58.394461    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:53:58.394472    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:53:58.405430    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:53:58.405464    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:53:58.422933    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:53:58.422945    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:53:58.448090    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:53:58.448100    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:53:58.486107    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:53:58.486122    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:53:58.522033    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:53:58.522044    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:53:58.533857    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:53:58.533869    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:53:58.538419    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:53:58.538425    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:53:58.551573    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:53:58.551584    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:53:58.565484    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:53:58.565496    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:01.078822    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:06.081575    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:06.081932    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:06.119479    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:06.119586    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:06.136539    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:06.136620    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:06.150193    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:06.150272    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:06.163488    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:06.163562    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:06.178741    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:06.178812    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:06.191934    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:06.192010    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:06.202718    8358 logs.go:276] 0 containers: []
	W0729 10:54:06.202731    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:06.202790    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:06.214689    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:06.214706    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:06.214712    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:06.227133    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:06.227145    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:06.268890    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:06.268902    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:06.284157    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:06.284169    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:06.324841    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:06.324863    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:06.340258    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:06.340270    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:06.352572    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:06.352582    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:06.363811    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:06.363823    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:06.382604    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:06.382615    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:06.398296    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:06.398308    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:06.421284    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:06.421292    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:06.456445    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:06.456452    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:06.460816    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:06.460824    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:06.472253    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:06.472265    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:06.491804    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:06.491815    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:06.503334    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:06.503344    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:06.518767    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:06.518779    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
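The block above is one pass of minikube's repeating diagnostic cycle: a healthz probe against the apiserver fails, so the tool enumerates the Kubernetes system containers and tails their logs before retrying. As a rough illustration only (this is not minikube's actual api_server.go code, and the TLS handling is an assumption), the probe pattern looks like:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The 5s timeout matches the gap between each "Checking" and "stopped" pair.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the guest apiserver's cert is not trusted from
			// outside the cluster, so a standalone probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz" // guest address from the log
	for attempt := 1; attempt <= 3; attempt++ {
		fmt.Println("Checking apiserver healthz at", url, "...")
		resp, err := client.Get(url)
		if err != nil {
			// Against an unresponsive endpoint this prints Go's standard
			// client-timeout error, the same text seen in every cycle above.
			fmt.Println("stopped:", err)
			time.Sleep(2500 * time.Millisecond) // the log resumes ~2.5s after each gathering pass
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}

The error string in every "stopped:" line is Go's standard net/http client-timeout message, consistent with a 5-second Timeout on the probe's HTTP client.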
	I0729 10:54:09.039917    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:14.042630    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:14.042847    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:14.062343    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:14.062437    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:14.076256    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:14.076344    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:14.088205    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:14.088278    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:14.101287    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:14.101362    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:14.111975    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:14.112041    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:14.122231    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:14.122328    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:14.132751    8358 logs.go:276] 0 containers: []
	W0729 10:54:14.132762    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:14.132817    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:14.143234    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:14.143250    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:14.143257    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:14.182875    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:14.182889    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:14.196565    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:14.196574    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:14.212617    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:14.212631    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:14.227091    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:14.227105    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:14.231454    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:14.231460    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:14.244585    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:14.244597    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:14.256821    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:14.256831    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:14.268343    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:14.268357    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:14.292774    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:14.292787    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:14.331979    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:14.331990    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:14.343362    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:14.343374    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:14.356383    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:14.356395    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:14.392811    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:14.392822    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:14.410544    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:14.410554    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:14.422146    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:14.422156    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:14.439328    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:14.439340    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
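Each cycle then rediscovers the per-component containers. Below is a minimal sketch of that step, assuming it behaves like the `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` commands logged above. Because `-a` includes exited containers, the two IDs reported for most components are likely a current instance plus an earlier, exited one; the empty kindnet result (and its warning) just means no kindnet CNI container exists on this node.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited, hence -a) whose
// name matches the k8s_<component> prefix kubelet gives pod containers.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		if len(ids) == 0 {
			// Mirrors the warning emitted for "kindnet" in every cycle.
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}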
	I0729 10:54:16.957253    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:21.959851    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:21.960045    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:21.977496    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:21.977586    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:21.991156    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:21.991234    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:22.002245    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:22.002316    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:22.012937    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:22.013006    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:22.023738    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:22.023807    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:22.035992    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:22.036069    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:22.046081    8358 logs.go:276] 0 containers: []
	W0729 10:54:22.046096    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:22.046154    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:22.056808    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:22.056828    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:22.056834    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:22.091090    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:22.091102    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:22.105464    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:22.105477    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:22.143894    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:22.143906    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:22.155282    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:22.155294    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:22.167983    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:22.167997    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:22.206030    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:22.206038    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:22.220374    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:22.220385    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:22.242872    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:22.242883    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:22.254601    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:22.254612    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:22.279446    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:22.279454    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:22.290986    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:22.290999    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:22.295070    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:22.295077    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:22.308940    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:22.308949    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:22.321448    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:22.321461    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:22.345239    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:22.345252    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:22.369943    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:22.369958    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
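After discovery, the same sources are tailed on every pass: the last 400 lines from each container, the kubelet and docker/cri-docker journals, dmesg, `kubectl describe nodes` run with the pinned /var/lib/minikube/binaries/v1.24.1/kubectl, and a container status listing that falls back from crictl to docker ps. A simplified sketch of that gathering pass follows (assumptions: the three hard-coded IDs are copied from the log, sudo is omitted, and minikube streams the text rather than counting bytes):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through a shell and reports how much
// output it produced.
func gather(name, command string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("  %s failed: %v\n", name, err)
	}
	fmt.Printf("  captured %d bytes\n", len(out))
}

func main() {
	// Container logs: last 400 lines each, as in the cycles above
	// (IDs are the apiserver, etcd, and coredns containers from the log).
	for _, id := range []string{"14a4ff9d95a0", "68a6d48feaae", "fbff0b09b3af"} {
		gather(id, "docker logs --tail 400 "+id)
	}
	// Host-side sources, matching the remaining commands in each cycle.
	gather("kubelet", "journalctl -u kubelet -n 400")
	gather("Docker", "journalctl -u docker -u cri-docker -n 400")
	gather("dmesg", "dmesg --level warn,err,crit,alert,emerg | tail -n 400")
}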
	I0729 10:54:24.883582    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:29.885988    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:29.886224    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:29.909410    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:29.909497    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:29.921278    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:29.921349    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:29.932621    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:29.932696    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:29.943381    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:29.943453    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:29.957512    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:29.957575    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:29.967902    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:29.967977    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:29.978308    8358 logs.go:276] 0 containers: []
	W0729 10:54:29.978320    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:29.978379    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:29.989448    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:29.989466    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:29.989473    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:30.013073    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:30.013082    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:30.024677    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:30.024687    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:30.062006    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:30.062017    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:30.101106    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:30.101120    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:30.117688    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:30.117700    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:30.131827    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:30.131838    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:30.147139    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:30.147149    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:30.160801    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:30.160813    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:30.172402    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:30.172413    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:30.189431    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:30.189443    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:30.201665    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:30.201676    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:30.238151    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:30.238163    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:30.254092    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:30.254106    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:30.270552    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:30.270565    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:30.274590    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:30.274596    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:30.288436    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:30.288446    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:32.801980    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:37.804239    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:37.804359    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:37.815845    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:37.815926    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:37.834573    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:37.834651    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:37.847048    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:37.847116    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:37.857564    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:37.857636    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:37.868353    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:37.868423    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:37.879125    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:37.879198    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:37.893600    8358 logs.go:276] 0 containers: []
	W0729 10:54:37.893612    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:37.893672    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:37.904316    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:37.904333    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:37.904339    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:37.947014    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:37.947025    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:37.961094    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:37.961105    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:37.977655    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:37.977671    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:37.989233    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:37.989250    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:38.005956    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:38.005969    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:38.019889    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:38.019904    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:38.024738    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:38.024748    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:38.036571    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:38.036582    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:38.050661    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:38.050675    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:38.062898    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:38.062906    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:38.074265    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:38.074280    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:38.112601    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:38.112614    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:38.127093    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:38.127107    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:38.138750    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:38.138761    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:38.149765    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:38.149774    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:38.174102    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:38.174108    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:40.714114    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:45.716351    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:45.716526    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:45.731246    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:45.731328    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:45.743386    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:45.743459    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:45.753815    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:45.753880    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:45.764376    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:45.764446    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:45.775917    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:45.775980    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:45.786580    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:45.786645    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:45.796975    8358 logs.go:276] 0 containers: []
	W0729 10:54:45.796988    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:45.797044    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:45.810903    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:45.810921    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:45.810927    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:45.825613    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:45.825625    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:45.840056    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:45.840067    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:45.851407    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:45.851419    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:45.863171    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:45.863181    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:45.867606    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:45.867613    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:45.902739    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:45.902749    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:45.920805    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:45.920816    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:45.959740    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:45.959752    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:45.982260    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:45.982271    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:46.005091    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:46.005101    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:46.041280    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:46.041291    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:46.052928    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:46.052940    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:46.065583    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:46.065594    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:46.081070    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:46.081083    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:46.099136    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:46.099150    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:46.110962    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:46.110973    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
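One detail worth confirming is that these probes fail on a fixed client timeout rather than on any response from the apiserver. A quick sketch using the two klog timestamps from the probe pair that opens the cycle above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// klog timestamps carry microseconds; this layout parses them directly.
	const layout = "15:04:05.000000"
	start, _ := time.Parse(layout, "10:54:40.714114") // "Checking apiserver healthz ..."
	stop, _ := time.Parse(layout, "10:54:45.716351")  // "stopped: ... context deadline exceeded"
	fmt.Println(stop.Sub(start)) // 5.002237s: the 5s client timeout plus overhead
}

Every Checking/stopped pair in this section shows the same ~5.002s spacing, so the apiserver never answered within the window; the failures are timeouts, not error responses.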
	I0729 10:54:48.624044    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:54:53.626229    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:54:53.626466    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:54:53.645334    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:54:53.645418    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:54:53.659413    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:54:53.659489    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:54:53.671905    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:54:53.671970    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:54:53.682559    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:54:53.682625    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:54:53.692995    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:54:53.693058    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:54:53.703624    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:54:53.703697    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:54:53.714250    8358 logs.go:276] 0 containers: []
	W0729 10:54:53.714261    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:54:53.714316    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:54:53.725119    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:54:53.725136    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:54:53.725143    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:54:53.739315    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:54:53.739329    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:54:53.752875    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:54:53.752890    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:54:53.764189    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:54:53.764200    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:54:53.780143    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:54:53.780158    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:54:53.796787    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:54:53.796801    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:54:53.810034    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:54:53.810044    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:54:53.821464    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:54:53.821476    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:54:53.833127    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:54:53.833139    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:54:53.845011    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:54:53.845027    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:54:53.880168    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:54:53.880179    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:54:53.891641    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:54:53.891653    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:54:53.928862    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:54:53.928874    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:54:53.943177    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:54:53.943191    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:54:53.955089    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:54:53.955100    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:54:53.979581    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:54:53.979589    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:54:54.018430    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:54:54.018437    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:54:56.524456    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:01.526682    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:01.526817    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:01.547109    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:01.547213    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:01.565531    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:01.565620    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:01.577931    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:01.578004    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:01.592342    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:01.592418    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:01.602862    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:01.602932    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:01.613020    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:01.613091    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:01.623133    8358 logs.go:276] 0 containers: []
	W0729 10:55:01.623145    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:01.623204    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:01.633571    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:01.633590    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:01.633597    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:01.647218    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:01.647228    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:01.661917    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:01.661929    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:01.674906    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:01.674916    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:01.687015    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:01.687025    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:01.723937    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:01.723950    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:01.758717    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:01.758729    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:01.770488    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:01.770501    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:01.808557    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:01.808569    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:01.822121    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:01.822136    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:01.834456    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:01.834466    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:01.850533    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:01.850546    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:01.868933    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:01.868946    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:01.893348    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:01.893356    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:01.897366    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:01.897371    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:01.911292    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:01.911301    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:01.925170    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:01.925186    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:04.439080    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:09.441708    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:09.441993    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:09.474909    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:09.475045    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:09.494069    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:09.494167    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:09.508493    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:09.508576    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:09.521286    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:09.521369    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:09.533982    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:09.534052    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:09.544888    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:09.544958    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:09.557206    8358 logs.go:276] 0 containers: []
	W0729 10:55:09.557219    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:09.557284    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:09.572044    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:09.572061    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:09.572066    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:09.609486    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:09.609497    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:09.625222    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:09.625231    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:09.648407    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:09.648418    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:09.660183    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:09.660192    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:09.698692    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:09.698701    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:09.735505    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:09.735518    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:09.748128    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:09.748139    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:09.760310    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:09.760321    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:09.777811    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:09.777823    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:09.789366    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:09.789376    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:09.803564    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:09.803577    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:09.817626    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:09.817637    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:09.821970    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:09.821978    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:09.836776    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:09.836786    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:09.848287    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:09.848298    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:09.862643    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:09.862652    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:12.376552    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:17.378808    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:17.378955    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:17.401739    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:17.401836    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:17.416797    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:17.416878    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:17.429284    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:17.429355    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:17.440511    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:17.440585    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:17.451053    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:17.451123    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:17.461652    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:17.461724    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:17.472023    8358 logs.go:276] 0 containers: []
	W0729 10:55:17.472034    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:17.472097    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:17.483144    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:17.483161    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:17.483167    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:17.488160    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:17.488167    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:17.502671    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:17.502681    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:17.538117    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:17.538128    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:17.561744    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:17.561753    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:17.599249    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:17.599259    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:17.613585    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:17.613596    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:17.625251    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:17.625262    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:17.644084    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:17.644096    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:17.655385    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:17.655397    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:17.667243    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:17.667254    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:17.682096    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:17.682106    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:17.698494    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:17.698508    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:17.710334    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:17.710350    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:17.747695    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:17.747709    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:17.765007    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:17.765021    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:17.782907    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:17.782917    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:20.309268    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:25.311550    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:25.311699    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:25.325178    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:25.325262    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:25.336299    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:25.336405    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:25.347616    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:25.347694    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:25.359729    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:25.359805    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:25.370591    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:25.370658    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:25.381019    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:25.381092    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:25.390732    8358 logs.go:276] 0 containers: []
	W0729 10:55:25.390743    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:25.390804    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:25.405410    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:25.405427    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:25.405432    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:25.418064    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:25.418078    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:25.452928    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:25.452945    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:25.467018    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:25.467032    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:25.481796    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:25.481809    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:25.496982    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:25.496992    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:25.508642    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:25.508653    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:25.524556    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:25.524574    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:25.529539    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:25.529549    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:25.566852    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:25.566866    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:25.582933    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:25.582947    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:25.600096    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:25.600109    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:25.612736    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:25.612748    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:25.649080    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:25.649092    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:25.664197    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:25.664210    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:25.675809    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:25.675820    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:25.687875    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:25.687890    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:28.212442    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:33.214431    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:33.214630    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:33.240185    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:33.240304    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:33.257131    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:33.257227    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:33.272497    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:33.272573    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:33.287544    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:33.287608    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:33.297467    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:33.297537    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:33.307613    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:33.307679    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:33.317605    8358 logs.go:276] 0 containers: []
	W0729 10:55:33.317617    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:33.317675    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:33.328223    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:33.328243    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:33.328248    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:33.340059    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:33.340072    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:33.355567    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:33.355578    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:33.394451    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:33.394460    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:33.409633    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:33.409643    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:33.444004    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:33.444016    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:33.467334    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:33.467347    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:33.479562    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:33.479576    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:33.490855    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:33.490867    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:33.514865    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:33.514872    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:33.527332    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:33.527344    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:33.541846    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:33.541857    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:33.555229    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:33.555244    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:33.594341    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:33.594353    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:33.605852    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:33.605868    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:33.622840    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:33.622855    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:33.627553    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:33.627560    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:36.143527    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:41.144804    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:41.144963    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:41.160847    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:41.160936    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:41.172861    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:41.172942    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:41.183243    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:41.183307    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:41.201744    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:41.201821    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:41.212244    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:41.212313    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:41.223604    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:41.223671    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:41.241229    8358 logs.go:276] 0 containers: []
	W0729 10:55:41.241241    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:41.241303    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:41.252329    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:41.252347    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:41.252354    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:41.263723    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:41.263736    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:41.275544    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:41.275554    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:41.289154    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:41.289165    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:41.302838    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:41.302847    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:41.316543    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:41.316554    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:41.353666    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:41.353676    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:41.369043    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:41.369054    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:41.386432    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:41.386442    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:41.398286    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:41.398298    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:41.410564    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:41.410576    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:41.447487    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:41.447499    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:41.460380    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:41.460392    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:41.464533    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:41.464540    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:41.479245    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:41.479256    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:41.491443    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:41.491454    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:41.513954    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:41.513960    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:44.052346    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:49.054590    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:49.054681    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:49.067856    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:49.067932    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:49.078944    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:49.079015    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:49.090914    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:49.090991    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:49.101562    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:49.101651    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:49.112357    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:49.112429    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:49.123272    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:49.123339    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:49.134011    8358 logs.go:276] 0 containers: []
	W0729 10:55:49.134026    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:49.134081    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:49.144585    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:49.144607    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:49.144613    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:49.158420    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:49.158431    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:49.170247    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:49.170258    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:49.181572    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:49.181581    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:49.192393    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:49.192404    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:49.215962    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:49.215973    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:49.220212    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:49.220218    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:49.234425    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:49.234440    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:49.272991    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:49.273003    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:49.288317    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:49.288328    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:49.301917    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:49.301928    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:49.313743    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:49.313755    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:49.353055    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:49.353066    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:49.364742    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:49.364754    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:49.376798    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:49.376810    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:49.410638    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:49.410649    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:49.426545    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:49.426558    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:51.946399    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:55:56.948619    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:55:56.948870    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:55:56.967357    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:55:56.967439    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:55:56.981460    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:55:56.981530    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:55:56.993104    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:55:56.993168    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:55:57.007315    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:55:57.007382    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:55:57.017905    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:55:57.017969    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:55:57.028643    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:55:57.028710    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:55:57.039098    8358 logs.go:276] 0 containers: []
	W0729 10:55:57.039111    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:55:57.039166    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:55:57.049734    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:55:57.049757    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:55:57.049763    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:55:57.086129    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:55:57.086138    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:55:57.097784    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:55:57.097794    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:55:57.114704    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:55:57.114713    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:55:57.128511    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:55:57.128521    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:55:57.143223    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:55:57.143233    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:55:57.154730    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:55:57.154740    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:55:57.169262    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:55:57.169276    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:55:57.180880    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:55:57.180889    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:55:57.185015    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:55:57.185023    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:55:57.218953    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:55:57.218963    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:55:57.230901    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:55:57.230911    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:55:57.243003    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:55:57.243016    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:55:57.256940    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:55:57.256949    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:55:57.278393    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:55:57.278400    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:55:57.292790    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:55:57.292806    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:55:57.331626    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:55:57.331637    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:55:59.849274    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:04.851596    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:04.851802    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:04.870921    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:56:04.871023    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:04.889626    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:56:04.889699    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:04.901192    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:56:04.901272    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:04.911799    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:56:04.911885    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:04.922779    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:56:04.922846    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:04.933853    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:56:04.933922    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:04.944566    8358 logs.go:276] 0 containers: []
	W0729 10:56:04.944583    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:04.944651    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:04.958385    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:56:04.958403    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:56:04.958408    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:56:04.974261    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:56:04.974273    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:56:04.993875    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:56:04.993885    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:56:05.014633    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:05.014644    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:05.020193    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:05.020208    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:05.068433    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:56:05.068445    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:56:05.082983    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:56:05.082994    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:56:05.124545    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:56:05.124556    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:56:05.138683    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:05.138694    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:05.161767    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:56:05.161775    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:05.175298    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:56:05.175309    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:56:05.189745    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:56:05.189755    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:56:05.202103    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:56:05.202114    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:56:05.217509    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:05.217521    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:05.257578    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:56:05.257596    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:56:05.269030    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:56:05.269043    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:56:05.280898    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:56:05.280910    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:56:07.796518    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:12.798826    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:12.799067    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:56:12.825043    8358 logs.go:276] 2 containers: [14a4ff9d95a0 2afc138a6e36]
	I0729 10:56:12.825143    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:56:12.847686    8358 logs.go:276] 2 containers: [68a6d48feaae 4494551802a6]
	I0729 10:56:12.847759    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:56:12.860011    8358 logs.go:276] 1 containers: [fbff0b09b3af]
	I0729 10:56:12.860081    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:56:12.870536    8358 logs.go:276] 2 containers: [8ad4866a16b3 a9a637b09ebc]
	I0729 10:56:12.870608    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:56:12.880753    8358 logs.go:276] 1 containers: [cc82106fc9da]
	I0729 10:56:12.880822    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:56:12.895351    8358 logs.go:276] 2 containers: [eee01c406f30 81df750d149b]
	I0729 10:56:12.895415    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:56:12.905576    8358 logs.go:276] 0 containers: []
	W0729 10:56:12.905588    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:56:12.905653    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:56:12.920562    8358 logs.go:276] 2 containers: [7610cd881aeb e2e7fb6e4b2d]
	I0729 10:56:12.920585    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:56:12.920591    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:56:12.925003    8358 logs.go:123] Gathering logs for etcd [68a6d48feaae] ...
	I0729 10:56:12.925011    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68a6d48feaae"
	I0729 10:56:12.939826    8358 logs.go:123] Gathering logs for coredns [fbff0b09b3af] ...
	I0729 10:56:12.939837    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff0b09b3af"
	I0729 10:56:12.951237    8358 logs.go:123] Gathering logs for kube-scheduler [a9a637b09ebc] ...
	I0729 10:56:12.951252    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9a637b09ebc"
	I0729 10:56:12.967257    8358 logs.go:123] Gathering logs for kube-controller-manager [eee01c406f30] ...
	I0729 10:56:12.967269    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eee01c406f30"
	I0729 10:56:12.984414    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:56:12.984425    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:56:13.009995    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:56:13.010006    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:56:13.052390    8358 logs.go:123] Gathering logs for etcd [4494551802a6] ...
	I0729 10:56:13.052412    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4494551802a6"
	I0729 10:56:13.073315    8358 logs.go:123] Gathering logs for kube-proxy [cc82106fc9da] ...
	I0729 10:56:13.073328    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc82106fc9da"
	I0729 10:56:13.085088    8358 logs.go:123] Gathering logs for kube-apiserver [14a4ff9d95a0] ...
	I0729 10:56:13.085102    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14a4ff9d95a0"
	I0729 10:56:13.105012    8358 logs.go:123] Gathering logs for kube-scheduler [8ad4866a16b3] ...
	I0729 10:56:13.105025    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad4866a16b3"
	I0729 10:56:13.116732    8358 logs.go:123] Gathering logs for kube-controller-manager [81df750d149b] ...
	I0729 10:56:13.116743    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81df750d149b"
	I0729 10:56:13.130262    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:56:13.130277    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:56:13.151644    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:56:13.151654    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:56:13.186951    8358 logs.go:123] Gathering logs for kube-apiserver [2afc138a6e36] ...
	I0729 10:56:13.186961    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2afc138a6e36"
	I0729 10:56:13.224491    8358 logs.go:123] Gathering logs for storage-provisioner [7610cd881aeb] ...
	I0729 10:56:13.224506    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7610cd881aeb"
	I0729 10:56:13.236411    8358 logs.go:123] Gathering logs for storage-provisioner [e2e7fb6e4b2d] ...
	I0729 10:56:13.236422    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e7fb6e4b2d"
	I0729 10:56:15.750561    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:20.752844    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:20.752925    8358 kubeadm.go:597] duration metric: took 4m4.086944209s to restartPrimaryControlPlane
	W0729 10:56:20.752998    8358 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 10:56:20.753032    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 10:56:21.780724    8358 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.027695334s)
	I0729 10:56:21.780781    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:56:21.786054    8358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:56:21.788783    8358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:56:21.791508    8358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:56:21.791516    8358 kubeadm.go:157] found existing configuration files:
	
	I0729 10:56:21.791539    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/admin.conf
	I0729 10:56:21.794222    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:56:21.794246    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:56:21.796655    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/kubelet.conf
	I0729 10:56:21.799636    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:56:21.799659    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:56:21.802497    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/controller-manager.conf
	I0729 10:56:21.804864    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:56:21.804886    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:56:21.807724    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/scheduler.conf
	I0729 10:56:21.810608    8358 kubeadm.go:163] "https://control-plane.minikube.internal:51474" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51474 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:56:21.810631    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 10:56:21.813019    8358 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:56:21.830570    8358 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 10:56:21.830621    8358 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:56:21.879001    8358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:56:21.879058    8358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:56:21.879142    8358 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:56:21.927603    8358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:56:21.930721    8358 out.go:204]   - Generating certificates and keys ...
	I0729 10:56:21.930762    8358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:56:21.930793    8358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:56:21.930831    8358 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 10:56:21.930862    8358 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 10:56:21.930900    8358 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 10:56:21.930929    8358 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 10:56:21.930993    8358 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 10:56:21.931041    8358 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 10:56:21.931085    8358 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 10:56:21.931153    8358 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 10:56:21.931182    8358 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 10:56:21.931217    8358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:56:21.989187    8358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:56:22.055605    8358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:56:22.332504    8358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:56:22.379098    8358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:56:22.407936    8358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:56:22.408289    8358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:56:22.408313    8358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:56:22.493563    8358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:56:22.497726    8358 out.go:204]   - Booting up control plane ...
	I0729 10:56:22.497777    8358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:56:22.497816    8358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:56:22.498069    8358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:56:22.498542    8358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:56:22.499832    8358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 10:56:27.003138    8358 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503115 seconds
	I0729 10:56:27.003281    8358 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:56:27.007545    8358 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:56:27.516777    8358 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:56:27.516944    8358 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-294000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:56:28.021036    8358 kubeadm.go:310] [bootstrap-token] Using token: 7dco59.hhqt2q6ndro3ugx4
	I0729 10:56:28.023847    8358 out.go:204]   - Configuring RBAC rules ...
	I0729 10:56:28.023916    8358 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:56:28.023971    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:56:28.028660    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:56:28.029661    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:56:28.030757    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:56:28.031658    8358 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:56:28.035253    8358 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:56:28.219801    8358 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:56:28.424585    8358 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:56:28.425198    8358 kubeadm.go:310] 
	I0729 10:56:28.425300    8358 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:56:28.425329    8358 kubeadm.go:310] 
	I0729 10:56:28.425398    8358 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:56:28.425402    8358 kubeadm.go:310] 
	I0729 10:56:28.425436    8358 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:56:28.425471    8358 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:56:28.425495    8358 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:56:28.425497    8358 kubeadm.go:310] 
	I0729 10:56:28.425528    8358 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:56:28.425531    8358 kubeadm.go:310] 
	I0729 10:56:28.425557    8358 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:56:28.425561    8358 kubeadm.go:310] 
	I0729 10:56:28.425599    8358 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:56:28.425665    8358 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:56:28.425703    8358 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:56:28.425705    8358 kubeadm.go:310] 
	I0729 10:56:28.425767    8358 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:56:28.425808    8358 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:56:28.425811    8358 kubeadm.go:310] 
	I0729 10:56:28.425869    8358 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7dco59.hhqt2q6ndro3ugx4 \
	I0729 10:56:28.425949    8358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8d6a503498cfac617ec351c4234f65718d8cbc12c41bd005a6931d270830028d \
	I0729 10:56:28.425961    8358 kubeadm.go:310] 	--control-plane 
	I0729 10:56:28.425964    8358 kubeadm.go:310] 
	I0729 10:56:28.426003    8358 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:56:28.426005    8358 kubeadm.go:310] 
	I0729 10:56:28.426045    8358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7dco59.hhqt2q6ndro3ugx4 \
	I0729 10:56:28.426096    8358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8d6a503498cfac617ec351c4234f65718d8cbc12c41bd005a6931d270830028d 
	I0729 10:56:28.426185    8358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 10:56:28.426191    8358 cni.go:84] Creating CNI manager for ""
	I0729 10:56:28.426199    8358 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:56:28.429990    8358 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 10:56:28.436117    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 10:56:28.439311    8358 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 10:56:28.445128    8358 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:56:28.445224    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-294000 minikube.k8s.io/updated_at=2024_07_29T10_56_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=stopped-upgrade-294000 minikube.k8s.io/primary=true
	I0729 10:56:28.445271    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:56:28.478168    8358 kubeadm.go:1113] duration metric: took 32.994083ms to wait for elevateKubeSystemPrivileges
	I0729 10:56:28.487697    8358 ops.go:34] apiserver oom_adj: -16
	I0729 10:56:28.487829    8358 kubeadm.go:394] duration metric: took 4m11.835398542s to StartCluster
	I0729 10:56:28.487842    8358 settings.go:142] acquiring lock: {Name:mk3ce889c5cdf5c514cbf9155d52acf6d279a087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:56:28.487929    8358 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:56:28.488336    8358 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/kubeconfig: {Name:mkf75fdff2d3e918223b7f2dbeb4359c01007a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:56:28.488554    8358 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:56:28.488604    8358 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:56:28.488586    8358 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 10:56:28.488667    8358 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-294000"
	I0729 10:56:28.488681    8358 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-294000"
	W0729 10:56:28.488684    8358 addons.go:243] addon storage-provisioner should already be in state true
	I0729 10:56:28.488696    8358 host.go:66] Checking if "stopped-upgrade-294000" exists ...
	I0729 10:56:28.488702    8358 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-294000"
	I0729 10:56:28.488726    8358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-294000"
	I0729 10:56:28.493081    8358 out.go:177] * Verifying Kubernetes components...
	I0729 10:56:28.493728    8358 kapi.go:59] client config for stopped-upgrade-294000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/stopped-upgrade-294000/client.key", CAFile:"/Users/jenkins/minikube-integration/19339-6071/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1020c4080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 10:56:28.497306    8358 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-294000"
	W0729 10:56:28.497312    8358 addons.go:243] addon default-storageclass should already be in state true
	I0729 10:56:28.497320    8358 host.go:66] Checking if "stopped-upgrade-294000" exists ...
	I0729 10:56:28.497850    8358 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:56:28.497855    8358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:56:28.497861    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	I0729 10:56:28.501065    8358 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:56:28.504106    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:56:28.508058    8358 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:56:28.508065    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:56:28.508071    8358 sshutil.go:53] new ssh client: &{IP:localhost Port:51439 SSHKeyPath:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/stopped-upgrade-294000/id_rsa Username:docker}
	I0729 10:56:28.580255    8358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:56:28.585943    8358 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:56:28.585988    8358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:56:28.589598    8358 api_server.go:72] duration metric: took 101.034625ms to wait for apiserver process to appear ...
	I0729 10:56:28.589606    8358 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:56:28.589613    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:28.630061    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:56:28.638193    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:56:33.591621    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:33.591661    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:38.591961    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:38.592023    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:43.592317    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:43.592349    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:48.592752    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:48.592784    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:53.593528    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:53.593573    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:56:58.593817    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:56:58.593860    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 10:56:58.968738    8358 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 10:56:58.973042    8358 out.go:177] * Enabled addons: storage-provisioner
	I0729 10:56:58.983969    8358 addons.go:510] duration metric: took 30.495914833s for enable addons: enabled=[storage-provisioner]
	I0729 10:57:03.594757    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:57:03.594779    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:57:08.596273    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:57:08.596321    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:57:13.597916    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:57:13.597938    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:57:18.599814    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:57:18.599840    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:57:23.600796    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:57:23.600843    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:57:28.603135    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:57:28.603288    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:57:28.614897    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:57:28.614969    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:57:28.625482    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:57:28.625554    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:57:28.635755    8358 logs.go:276] 2 containers: [310fae3c4556 d576d5e5186f]
	I0729 10:57:28.635817    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:57:28.647405    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:57:28.647469    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:57:28.657920    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:57:28.658005    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:57:28.668212    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:57:28.668281    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:57:28.678606    8358 logs.go:276] 0 containers: []
	W0729 10:57:28.678623    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:57:28.678683    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:57:28.689339    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:57:28.689355    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:57:28.689361    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:57:28.701285    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:57:28.701295    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:57:28.712704    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:57:28.712718    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:57:28.734467    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:57:28.734479    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:57:28.768715    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:57:28.768726    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:57:28.782996    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:57:28.783006    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:57:28.796744    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:57:28.796755    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:57:28.812592    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:57:28.812602    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:57:28.824274    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:57:28.824286    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:57:28.849398    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:57:28.849407    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:57:28.860447    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:57:28.860456    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:57:28.865072    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:57:28.865078    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:57:28.903716    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:57:28.903730    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
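	[Editor's note] Once a healthz round gives up, the log switches to discovery: one `docker ps -a` per component, filtered on the `k8s_` name prefix that cri-dockerd gives pod containers, formatted to print only IDs (the logs.go:276 "N containers" lines). A sketch of that loop, run locally here rather than over SSH as ssh_runner.go does; the component list is copied from the filters above:

```go
// For every component, list container IDs whose Docker name starts with
// the k8s_ prefix, mirroring the `docker ps -a --filter=name=k8s_<name>
// --format={{.ID}}` invocations in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		// An empty result yields the "No container was found" warning above,
		// as happens for "kindnet" on this docker-network cluster.
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
```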
	I0729 10:57:31.417171    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:57:36.418964    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:57:36.419216    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:57:36.447894    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:57:36.448007    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:57:36.465666    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:57:36.465742    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:57:36.479051    8358 logs.go:276] 2 containers: [310fae3c4556 d576d5e5186f]
	I0729 10:57:36.479121    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:57:36.490237    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:57:36.490300    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:57:36.500427    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:57:36.500501    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:57:36.510945    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:57:36.511008    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:57:36.524233    8358 logs.go:276] 0 containers: []
	W0729 10:57:36.524248    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:57:36.524302    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:57:36.534496    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:57:36.534510    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:57:36.534515    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:57:36.551519    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:57:36.551532    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:57:36.585504    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:57:36.585517    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:57:36.589728    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:57:36.589736    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:57:36.603655    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:57:36.603665    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:57:36.617241    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:57:36.617254    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:57:36.634138    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:57:36.634151    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:57:36.645356    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:57:36.645368    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:57:36.680412    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:57:36.680425    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:57:36.692403    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:57:36.692416    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:57:36.703735    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:57:36.703747    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:57:36.716499    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:57:36.716512    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:57:36.740131    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:57:36.740139    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:57:39.254111    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:57:44.256439    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:57:44.256799    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:57:44.294530    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:57:44.294662    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:57:44.317993    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:57:44.318086    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:57:44.333069    8358 logs.go:276] 2 containers: [310fae3c4556 d576d5e5186f]
	I0729 10:57:44.333149    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:57:44.345071    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:57:44.345137    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:57:44.363300    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:57:44.363379    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:57:44.373615    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:57:44.373683    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:57:44.384324    8358 logs.go:276] 0 containers: []
	W0729 10:57:44.384338    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:57:44.384389    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:57:44.397574    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:57:44.397590    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:57:44.397596    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:57:44.422416    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:57:44.422425    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:57:44.433785    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:57:44.433795    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:57:44.469011    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:57:44.469027    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:57:44.483414    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:57:44.483426    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:57:44.496139    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:57:44.496154    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:57:44.507774    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:57:44.507785    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:57:44.522887    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:57:44.522902    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:57:44.541739    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:57:44.541751    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:57:44.559624    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:57:44.559636    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:57:44.571199    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:57:44.571210    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:57:44.575441    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:57:44.575450    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:57:44.611888    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:57:44.611901    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
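	[Editor's note] Each discovered ID is then tailed with `docker logs --tail 400 <id>`, alongside journalctl for the kubelet and Docker units. A hedged sketch of one gathering pass; `gather` is a hypothetical helper, and the two IDs are simply the ones reported in the log above:

```go
// Tail the last 400 log lines from a container, as the "Gathering logs
// for <component> [<id>]" steps above do via /bin/bash -c over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, id string) {
	fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
	out, err := exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id).CombinedOutput()
	if err != nil {
		fmt.Printf("  failed: %v\n", err)
		return
	}
	fmt.Printf("  collected %d bytes\n", len(out))
}

func main() {
	gather("kube-apiserver", "2490def3c0ba") // IDs taken from the log above
	gather("etcd", "468b83fd7685")
}
```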
	I0729 10:57:47.127624    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:57:52.127852    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:57:52.127930    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:57:52.141997    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:57:52.142069    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:57:52.155058    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:57:52.155118    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:57:52.166225    8358 logs.go:276] 2 containers: [310fae3c4556 d576d5e5186f]
	I0729 10:57:52.166282    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:57:52.177267    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:57:52.177328    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:57:52.188579    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:57:52.188636    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:57:52.200902    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:57:52.200967    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:57:52.212997    8358 logs.go:276] 0 containers: []
	W0729 10:57:52.213010    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:57:52.213066    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:57:52.225834    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:57:52.225849    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:57:52.225856    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:57:52.240113    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:57:52.240126    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:57:52.258691    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:57:52.258703    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:57:52.279919    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:57:52.279928    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:57:52.291588    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:57:52.291603    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:57:52.310219    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:57:52.310231    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:57:52.324103    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:57:52.324114    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:57:52.328917    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:57:52.328923    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:57:52.364210    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:57:52.364223    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:57:52.378564    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:57:52.378577    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:57:52.395742    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:57:52.395753    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:57:52.407277    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:57:52.407287    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:57:52.431827    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:57:52.431840    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:57:54.968556    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:57:59.970941    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:57:59.971242    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:58:00.000885    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:58:00.001004    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:58:00.019615    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:58:00.019709    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:58:00.032604    8358 logs.go:276] 2 containers: [310fae3c4556 d576d5e5186f]
	I0729 10:58:00.032694    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:58:00.046504    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:58:00.046566    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:58:00.056956    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:58:00.057032    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:58:00.067297    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:58:00.067363    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:58:00.077093    8358 logs.go:276] 0 containers: []
	W0729 10:58:00.077105    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:58:00.077163    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:58:00.087202    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:58:00.087218    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:58:00.087223    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:58:00.102155    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:58:00.102164    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:58:00.119036    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:58:00.119047    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:58:00.143690    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:58:00.143700    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:58:00.157984    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:58:00.157994    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:58:00.170176    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:58:00.170186    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:58:00.183594    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:58:00.183604    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:58:00.197042    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:58:00.197053    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:58:00.208739    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:58:00.208752    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:58:00.220229    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:58:00.220241    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:58:00.231964    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:58:00.231975    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:58:00.266953    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:58:00.266962    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:58:00.271267    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:58:00.271272    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:58:02.811255    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:58:07.813632    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:58:07.813771    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:58:07.834656    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:58:07.834712    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:58:07.848866    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:58:07.848945    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:58:07.861838    8358 logs.go:276] 2 containers: [310fae3c4556 d576d5e5186f]
	I0729 10:58:07.861891    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:58:07.873153    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:58:07.873219    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:58:07.883794    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:58:07.883862    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:58:07.894113    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:58:07.894177    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:58:07.907418    8358 logs.go:276] 0 containers: []
	W0729 10:58:07.907429    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:58:07.907469    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:58:07.918791    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:58:07.918809    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:58:07.918813    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:58:07.944154    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:58:07.944168    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:58:07.969066    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:58:07.969077    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:58:08.003325    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:58:08.003333    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:58:08.021301    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:58:08.021312    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:58:08.035834    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:58:08.035843    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:58:08.051505    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:58:08.051518    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:58:08.062552    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:58:08.062564    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:58:08.077474    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:58:08.077486    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:58:08.089017    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:58:08.089027    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:58:08.100434    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:58:08.100445    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:58:08.105011    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:58:08.105017    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:58:08.139370    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:58:08.139381    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
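	[Editor's note] The recurring "container status" command is worth unpacking: it resolves crictl via `which` (falling back to the bare name if it is not on PATH) and, if that whole invocation fails, drops back to `sudo docker ps -a`. The same fallback chain, run locally for illustration:

```go
// Reproduce the container-status command from the log: prefer crictl when
// installed, otherwise fall back to plain docker ps. The backticks are
// shell command substitution inside the bash -c string.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("container status failed: %v\n", err)
		return
	}
	fmt.Print(string(out))
}
```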
	I0729 10:58:10.652830    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:58:15.655271    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:58:15.655497    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:58:15.677480    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:58:15.677599    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:58:15.692593    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:58:15.692681    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:58:15.705475    8358 logs.go:276] 2 containers: [310fae3c4556 d576d5e5186f]
	I0729 10:58:15.705543    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:58:15.716775    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:58:15.716842    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:58:15.727472    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:58:15.727537    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:58:15.738300    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:58:15.738365    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:58:15.748603    8358 logs.go:276] 0 containers: []
	W0729 10:58:15.748615    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:58:15.748670    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:58:15.759264    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:58:15.759278    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:58:15.759284    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:58:15.763844    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:58:15.763853    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:58:15.775501    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:58:15.775513    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:58:15.789824    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:58:15.789838    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:58:15.806830    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:58:15.806843    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:58:15.830043    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:58:15.830052    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:58:15.840901    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:58:15.840911    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:58:15.852389    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:58:15.852401    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:58:15.886570    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:58:15.886576    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:58:15.920317    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:58:15.920329    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:58:15.934748    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:58:15.934782    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:58:15.948136    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:58:15.948144    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:58:15.964360    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:58:15.964374    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:58:18.478218    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:58:23.480540    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:58:23.480761    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:58:23.509836    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:58:23.509910    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:58:23.524510    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:58:23.524578    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:58:23.536198    8358 logs.go:276] 2 containers: [310fae3c4556 d576d5e5186f]
	I0729 10:58:23.536260    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:58:23.547057    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:58:23.547128    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:58:23.557738    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:58:23.557799    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:58:23.568984    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:58:23.569044    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:58:23.579268    8358 logs.go:276] 0 containers: []
	W0729 10:58:23.579280    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:58:23.579328    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:58:23.589896    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:58:23.589914    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:58:23.589919    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:58:23.602071    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:58:23.602081    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:58:23.636702    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:58:23.636711    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:58:23.640899    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:58:23.640905    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:58:23.680453    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:58:23.680466    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:58:23.695573    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:58:23.695584    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:58:23.707255    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:58:23.707266    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:58:23.719204    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:58:23.719218    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:58:23.734095    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:58:23.734108    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:58:23.751013    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:58:23.751024    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:58:23.762567    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:58:23.762579    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:58:23.786685    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:58:23.786692    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:58:23.804902    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:58:23.804913    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:58:26.318261    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:58:31.319623    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:58:31.319872    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:58:31.344811    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:58:31.344899    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:58:31.359377    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:58:31.359444    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:58:31.371433    8358 logs.go:276] 2 containers: [310fae3c4556 d576d5e5186f]
	I0729 10:58:31.371501    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:58:31.382402    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:58:31.382462    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:58:31.393313    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:58:31.393381    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:58:31.404955    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:58:31.405018    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:58:31.415671    8358 logs.go:276] 0 containers: []
	W0729 10:58:31.415688    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:58:31.415747    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:58:31.428734    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:58:31.428748    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:58:31.428752    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:58:31.444152    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:58:31.444162    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:58:31.458935    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:58:31.458947    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:58:31.471342    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:58:31.471354    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:58:31.489045    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:58:31.489056    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:58:31.521674    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:58:31.521682    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:58:31.556395    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:58:31.556409    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:58:31.570836    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:58:31.570848    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:58:31.583070    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:58:31.583081    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:58:31.595114    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:58:31.595125    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:58:31.618441    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:58:31.618451    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:58:31.630165    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:58:31.630175    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:58:31.634390    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:58:31.634399    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:58:34.155582    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:58:39.158172    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:58:39.158349    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:58:39.171475    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:58:39.171544    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:58:39.182963    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:58:39.183029    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:58:39.194209    8358 logs.go:276] 2 containers: [310fae3c4556 d576d5e5186f]
	I0729 10:58:39.194281    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:58:39.205117    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:58:39.205186    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:58:39.216402    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:58:39.216467    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:58:39.227615    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:58:39.227678    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:58:39.238857    8358 logs.go:276] 0 containers: []
	W0729 10:58:39.238868    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:58:39.238917    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:58:39.249715    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:58:39.249729    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:58:39.249735    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:58:39.261581    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:58:39.261595    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:58:39.294057    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:58:39.294063    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:58:39.308731    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:58:39.308743    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:58:39.320720    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:58:39.320731    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:58:39.337084    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:58:39.337095    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:58:39.360296    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:58:39.360304    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:58:39.377628    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:58:39.377637    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:58:39.390018    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:58:39.390030    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:58:39.394274    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:58:39.394285    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:58:39.429149    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:58:39.429160    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:58:39.443686    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:58:39.443697    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:58:39.458517    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:58:39.458527    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:58:41.970996    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:58:46.973243    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:58:46.973625    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:58:47.011827    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:58:47.011933    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:58:47.031700    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:58:47.031786    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:58:47.045117    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 10:58:47.045189    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:58:47.056085    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:58:47.056144    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:58:47.066553    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:58:47.066620    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:58:47.079590    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:58:47.079658    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:58:47.090579    8358 logs.go:276] 0 containers: []
	W0729 10:58:47.090589    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:58:47.090636    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:58:47.101921    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:58:47.101940    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:58:47.101945    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:58:47.135736    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:58:47.135748    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:58:47.150082    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 10:58:47.150092    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 10:58:47.165234    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:58:47.165245    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:58:47.177553    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:58:47.177564    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:58:47.181714    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:58:47.181723    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:58:47.193629    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:58:47.193638    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:58:47.211144    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:58:47.211155    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:58:47.229802    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:58:47.229814    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:58:47.241715    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:58:47.241724    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:58:47.275730    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 10:58:47.275740    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 10:58:47.294788    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:58:47.294800    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:58:47.310406    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:58:47.310416    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:58:47.322004    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:58:47.322014    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:58:47.345261    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:58:47.345270    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
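	[Editor's note] The coredns line changed at 10:58:47: two new containers (4206a0f3c3f5, bc963fcc3a9f) joined 310fae3c4556 and d576d5e5186f, suggesting coredns was restarted while the apiserver stayed unreachable. A small helper to diff the two snapshots exactly as the log reports them; `newIDs` is a hypothetical function, and the IDs are copied from the lines above:

```go
// Diff two container-ID snapshots to find IDs that appeared between
// gathering rounds.
package main

import "fmt"

func newIDs(before, after []string) []string {
	seen := map[string]bool{}
	for _, id := range before {
		seen[id] = true
	}
	var fresh []string
	for _, id := range after {
		if !seen[id] {
			fresh = append(fresh, id)
		}
	}
	return fresh
}

func main() {
	before := []string{"310fae3c4556", "d576d5e5186f"}
	after := []string{"4206a0f3c3f5", "bc963fcc3a9f", "310fae3c4556", "d576d5e5186f"}
	fmt.Println(newIDs(before, after)) // [4206a0f3c3f5 bc963fcc3a9f]
}
```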
	I0729 10:58:49.859123    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:58:54.861551    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:58:54.861651    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:58:54.872528    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:58:54.872581    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:58:54.884197    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:58:54.884264    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:58:54.896716    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 10:58:54.896775    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:58:54.909227    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:58:54.909286    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:58:54.920885    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:58:54.920949    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:58:54.932496    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:58:54.932556    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:58:54.944049    8358 logs.go:276] 0 containers: []
	W0729 10:58:54.944060    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:58:54.944097    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:58:54.961091    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:58:54.961110    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 10:58:54.961116    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 10:58:54.974436    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:58:54.974448    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:58:55.008292    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 10:58:55.008310    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 10:58:55.021234    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:58:55.021248    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:58:55.034531    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:58:55.034543    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:58:55.052661    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:58:55.052674    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:58:55.065850    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:58:55.065865    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:58:55.103427    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:58:55.103444    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:58:55.119101    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:58:55.119116    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:58:55.135662    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:58:55.135674    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:58:55.140644    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:58:55.140655    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:58:55.155508    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:58:55.155521    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:58:55.175416    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:58:55.175428    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:58:55.192363    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:58:55.192376    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:58:55.205742    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:58:55.205750    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:58:57.732790    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:59:02.735601    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:59:02.736052    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:59:02.780896    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:59:02.781028    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:59:02.802331    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:59:02.802443    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:59:02.816941    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 10:59:02.817026    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:59:02.838353    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:59:02.838423    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:59:02.848806    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:59:02.848874    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:59:02.866093    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:59:02.866156    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:59:02.876129    8358 logs.go:276] 0 containers: []
	W0729 10:59:02.876140    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:59:02.876194    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:59:02.886602    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:59:02.886621    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:59:02.886626    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:59:02.891913    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 10:59:02.891922    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 10:59:02.903558    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:59:02.903572    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:59:02.915593    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:59:02.915606    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:59:02.926985    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:59:02.926998    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:59:02.938393    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:59:02.938405    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:59:02.950395    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:59:02.950405    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:59:02.975234    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:59:02.975243    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:59:03.007528    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:59:03.007534    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:59:03.042022    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:59:03.042035    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:59:03.057335    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:59:03.057348    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:59:03.071186    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 10:59:03.071197    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 10:59:03.082587    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:59:03.082599    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:59:03.097308    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:59:03.097321    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:59:03.114299    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:59:03.114310    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:59:05.628487    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:59:10.630845    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:59:10.631321    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:59:10.670947    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:59:10.671080    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:59:10.691840    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:59:10.691935    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:59:10.706238    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 10:59:10.706309    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:59:10.718542    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:59:10.718611    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:59:10.729250    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:59:10.729322    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:59:10.739867    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:59:10.739933    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:59:10.750024    8358 logs.go:276] 0 containers: []
	W0729 10:59:10.750038    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:59:10.750094    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:59:10.761066    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:59:10.761088    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:59:10.761093    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:59:10.775733    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 10:59:10.775747    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 10:59:10.787864    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:59:10.787875    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:59:10.803414    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:59:10.803426    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:59:10.828835    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:59:10.828842    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:59:10.868537    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:59:10.868550    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:59:10.880529    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:59:10.880540    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:59:10.893048    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:59:10.893061    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:59:10.897344    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:59:10.897355    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:59:10.911488    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:59:10.911500    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:59:10.923519    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:59:10.923529    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:59:10.941307    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:59:10.941317    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:59:10.975775    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:59:10.975785    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:59:10.989981    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 10:59:10.989990    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 10:59:11.001630    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:59:11.001641    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:59:13.515568    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:59:18.518243    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:59:18.518329    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:59:18.529657    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:59:18.529726    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:59:18.540684    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:59:18.540753    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:59:18.552342    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 10:59:18.552396    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:59:18.564570    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:59:18.564621    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:59:18.575571    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:59:18.575627    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:59:18.586410    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:59:18.586471    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:59:18.597779    8358 logs.go:276] 0 containers: []
	W0729 10:59:18.597789    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:59:18.597841    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:59:18.612905    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:59:18.612919    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:59:18.612923    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:59:18.624312    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:59:18.624324    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:59:18.637044    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:59:18.637054    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:59:18.650616    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:59:18.650627    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:59:18.676219    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:59:18.676239    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:59:18.699410    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:59:18.699431    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:59:18.716126    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:59:18.716137    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:59:18.722089    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:59:18.722099    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:59:18.737040    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:59:18.737049    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:59:18.752582    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 10:59:18.752593    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 10:59:18.768155    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:59:18.768167    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:59:18.780952    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:59:18.780960    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:59:18.815744    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:59:18.815765    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:59:18.857399    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 10:59:18.857411    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 10:59:18.871451    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:59:18.871464    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:59:21.388754    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:59:26.391375    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:59:26.391873    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:59:26.432203    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:59:26.432339    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:59:26.454107    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:59:26.454201    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:59:26.469795    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 10:59:26.469872    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:59:26.482319    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:59:26.482393    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:59:26.492835    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:59:26.492898    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:59:26.504218    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:59:26.504287    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:59:26.514461    8358 logs.go:276] 0 containers: []
	W0729 10:59:26.514472    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:59:26.514528    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:59:26.525065    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:59:26.525084    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:59:26.525089    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:59:26.539317    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:59:26.539329    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:59:26.551518    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:59:26.551531    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:59:26.577051    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 10:59:26.577061    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 10:59:26.590636    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:59:26.590650    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:59:26.605461    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:59:26.605473    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:59:26.626323    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:59:26.626334    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:59:26.633875    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:59:26.633885    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:59:26.668583    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 10:59:26.668592    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 10:59:26.680029    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:59:26.680037    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:59:26.715385    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:59:26.715393    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:59:26.728152    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:59:26.728166    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:59:26.739875    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:59:26.739889    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:59:26.754183    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:59:26.754196    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:59:26.766466    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:59:26.766479    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:59:29.282016    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:59:34.284699    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:59:34.285121    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:59:34.336381    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:59:34.336506    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:59:34.355891    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:59:34.356003    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:59:34.370803    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 10:59:34.370875    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:59:34.382875    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:59:34.382941    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:59:34.394057    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:59:34.394123    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:59:34.405018    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:59:34.405091    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:59:34.419261    8358 logs.go:276] 0 containers: []
	W0729 10:59:34.419273    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:59:34.419332    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:59:34.434497    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:59:34.434517    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:59:34.434523    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:59:34.471087    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 10:59:34.471098    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 10:59:34.485154    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 10:59:34.485166    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 10:59:34.496931    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:59:34.496945    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:59:34.501289    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:59:34.501297    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:59:34.515801    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:59:34.515814    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:59:34.534137    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:59:34.534149    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:59:34.558649    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:59:34.558657    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:59:34.570489    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:59:34.570498    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:59:34.582838    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:59:34.582849    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:59:34.595025    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:59:34.595038    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:59:34.607054    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:59:34.607066    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:59:34.642158    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:59:34.642168    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:59:34.662283    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:59:34.662293    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:59:34.674150    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:59:34.674161    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:59:37.191060    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:59:42.193721    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:59:42.193856    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:59:42.214683    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:59:42.214771    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:59:42.229813    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:59:42.229871    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:59:42.243714    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 10:59:42.243797    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:59:42.255581    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:59:42.255651    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:59:42.267814    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:59:42.267878    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:59:42.279531    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:59:42.279621    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:59:42.291421    8358 logs.go:276] 0 containers: []
	W0729 10:59:42.291434    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:59:42.291490    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:59:42.309923    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:59:42.309940    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:59:42.309946    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:59:42.327813    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:59:42.327824    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:59:42.341460    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:59:42.341474    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:59:42.385945    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 10:59:42.385958    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 10:59:42.399135    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 10:59:42.399147    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 10:59:42.414641    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:59:42.414653    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:59:42.427844    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:59:42.427856    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:59:42.441183    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:59:42.441194    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:59:42.453937    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:59:42.453952    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:59:42.458762    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:59:42.458774    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:59:42.474401    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:59:42.474413    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:59:42.488694    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:59:42.488706    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:59:42.504354    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:59:42.504368    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:59:42.528293    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:59:42.528309    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:59:42.563751    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:59:42.563770    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:59:45.090794    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:59:50.091656    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:59:50.092024    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:59:50.126920    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:59:50.127053    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:59:50.150770    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:59:50.150906    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:59:50.165670    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 10:59:50.165743    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:59:50.177742    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:59:50.177816    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:59:50.188133    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:59:50.188199    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:59:50.199359    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:59:50.199428    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:59:50.209553    8358 logs.go:276] 0 containers: []
	W0729 10:59:50.209563    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:59:50.209618    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:59:50.220073    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:59:50.220090    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:59:50.220097    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:59:50.241122    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:59:50.241134    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:59:50.252891    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:59:50.252904    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:59:50.264536    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:59:50.264548    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:59:50.268765    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:59:50.268772    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:59:50.283637    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 10:59:50.283650    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 10:59:50.295824    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:59:50.295836    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:59:50.308565    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:59:50.308577    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:59:50.319696    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:59:50.319711    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:59:50.333856    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:59:50.333867    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:59:50.346901    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:59:50.346915    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:59:50.361801    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:59:50.361813    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:59:50.386217    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:59:50.386227    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:59:50.419986    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:59:50.419996    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:59:50.458974    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 10:59:50.458987    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 10:59:52.972752    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 10:59:57.975379    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 10:59:57.975639    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 10:59:58.003914    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 10:59:58.004042    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 10:59:58.022863    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 10:59:58.022949    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 10:59:58.037963    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 10:59:58.038039    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 10:59:58.053592    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 10:59:58.053660    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 10:59:58.064543    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 10:59:58.064602    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 10:59:58.078006    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 10:59:58.078067    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 10:59:58.088137    8358 logs.go:276] 0 containers: []
	W0729 10:59:58.088150    8358 logs.go:278] No container was found matching "kindnet"
	I0729 10:59:58.088197    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 10:59:58.098516    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 10:59:58.098535    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 10:59:58.098540    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 10:59:58.113462    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 10:59:58.113473    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 10:59:58.125158    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 10:59:58.125169    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 10:59:58.142318    8358 logs.go:123] Gathering logs for container status ...
	I0729 10:59:58.142329    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:59:58.154564    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 10:59:58.154577    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:59:58.159037    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 10:59:58.159046    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 10:59:58.173481    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 10:59:58.173489    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 10:59:58.185089    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 10:59:58.185101    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 10:59:58.197338    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 10:59:58.197347    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 10:59:58.231386    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 10:59:58.231394    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 10:59:58.246764    8358 logs.go:123] Gathering logs for Docker ...
	I0729 10:59:58.246777    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 10:59:58.270793    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 10:59:58.270803    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 10:59:58.283240    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:59:58.283250    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:59:58.319661    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 10:59:58.319672    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 10:59:58.333816    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 10:59:58.333827    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 11:00:00.847994    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 11:00:05.850445    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 11:00:05.850916    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 11:00:05.896401    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 11:00:05.896528    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 11:00:05.916539    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 11:00:05.916625    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 11:00:05.939751    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 11:00:05.939829    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 11:00:05.950810    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 11:00:05.950881    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 11:00:05.962873    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 11:00:05.962934    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 11:00:05.974182    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 11:00:05.974250    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 11:00:05.984589    8358 logs.go:276] 0 containers: []
	W0729 11:00:05.984600    8358 logs.go:278] No container was found matching "kindnet"
	I0729 11:00:05.984648    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 11:00:05.994981    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 11:00:05.994997    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:00:05.995002    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 11:00:06.052820    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 11:00:06.052832    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 11:00:06.067644    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 11:00:06.067657    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 11:00:06.079594    8358 logs.go:123] Gathering logs for Docker ...
	I0729 11:00:06.079607    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 11:00:06.104668    8358 logs.go:123] Gathering logs for container status ...
	I0729 11:00:06.104674    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:00:06.118179    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 11:00:06.118191    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 11:00:06.130534    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 11:00:06.130545    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 11:00:06.142353    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 11:00:06.142365    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 11:00:06.157663    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 11:00:06.157673    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:00:06.190021    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 11:00:06.190028    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 11:00:06.203893    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 11:00:06.203902    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 11:00:06.221232    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 11:00:06.221244    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 11:00:06.233245    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 11:00:06.233256    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:00:06.237432    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 11:00:06.237438    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 11:00:06.250008    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 11:00:06.250021    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 11:00:08.764195    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 11:00:13.766424    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 11:00:13.766767    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 11:00:13.797715    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 11:00:13.797832    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 11:00:13.815892    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 11:00:13.815977    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 11:00:13.829646    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 11:00:13.829722    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 11:00:13.840550    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 11:00:13.840616    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 11:00:13.853530    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 11:00:13.853599    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 11:00:13.869688    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 11:00:13.869757    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 11:00:13.879752    8358 logs.go:276] 0 containers: []
	W0729 11:00:13.879766    8358 logs.go:278] No container was found matching "kindnet"
	I0729 11:00:13.879824    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 11:00:13.890032    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 11:00:13.890055    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 11:00:13.890061    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 11:00:13.901230    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 11:00:13.901240    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 11:00:13.917236    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 11:00:13.917246    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 11:00:13.928973    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 11:00:13.928983    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 11:00:13.947174    8358 logs.go:123] Gathering logs for container status ...
	I0729 11:00:13.947184    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:00:13.961730    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 11:00:13.961744    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:00:13.995029    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 11:00:13.995040    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:00:13.999226    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 11:00:13.999232    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 11:00:14.010669    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 11:00:14.010679    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 11:00:14.022451    8358 logs.go:123] Gathering logs for Docker ...
	I0729 11:00:14.022463    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 11:00:14.046643    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 11:00:14.046650    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 11:00:14.060385    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 11:00:14.060394    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 11:00:14.073956    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 11:00:14.073993    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 11:00:14.085042    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:00:14.085056    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 11:00:14.127103    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 11:00:14.127114    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 11:00:16.640945    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 11:00:21.643602    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 11:00:21.644038    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 11:00:21.678738    8358 logs.go:276] 1 containers: [2490def3c0ba]
	I0729 11:00:21.678861    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 11:00:21.699301    8358 logs.go:276] 1 containers: [468b83fd7685]
	I0729 11:00:21.699405    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 11:00:21.715227    8358 logs.go:276] 4 containers: [4206a0f3c3f5 bc963fcc3a9f 310fae3c4556 d576d5e5186f]
	I0729 11:00:21.715297    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 11:00:21.727587    8358 logs.go:276] 1 containers: [50ffad2915d6]
	I0729 11:00:21.727649    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 11:00:21.738744    8358 logs.go:276] 1 containers: [bbb2c4abdab6]
	I0729 11:00:21.738804    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 11:00:21.750142    8358 logs.go:276] 1 containers: [bcc51a2b7568]
	I0729 11:00:21.750198    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 11:00:21.760332    8358 logs.go:276] 0 containers: []
	W0729 11:00:21.760343    8358 logs.go:278] No container was found matching "kindnet"
	I0729 11:00:21.760404    8358 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 11:00:21.771714    8358 logs.go:276] 1 containers: [b2779763fabc]
	I0729 11:00:21.771732    8358 logs.go:123] Gathering logs for Docker ...
	I0729 11:00:21.771738    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 11:00:21.795096    8358 logs.go:123] Gathering logs for container status ...
	I0729 11:00:21.795107    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:00:21.807308    8358 logs.go:123] Gathering logs for kubelet ...
	I0729 11:00:21.807321    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 11:00:21.840236    8358 logs.go:123] Gathering logs for storage-provisioner [b2779763fabc] ...
	I0729 11:00:21.840243    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2779763fabc"
	I0729 11:00:21.851979    8358 logs.go:123] Gathering logs for coredns [310fae3c4556] ...
	I0729 11:00:21.851993    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 310fae3c4556"
	I0729 11:00:21.863708    8358 logs.go:123] Gathering logs for etcd [468b83fd7685] ...
	I0729 11:00:21.863719    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468b83fd7685"
	I0729 11:00:21.877698    8358 logs.go:123] Gathering logs for coredns [4206a0f3c3f5] ...
	I0729 11:00:21.877708    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4206a0f3c3f5"
	I0729 11:00:21.889239    8358 logs.go:123] Gathering logs for coredns [bc963fcc3a9f] ...
	I0729 11:00:21.889249    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc963fcc3a9f"
	I0729 11:00:21.900919    8358 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:00:21.900932    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 11:00:21.934685    8358 logs.go:123] Gathering logs for kube-apiserver [2490def3c0ba] ...
	I0729 11:00:21.934697    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2490def3c0ba"
	I0729 11:00:21.951977    8358 logs.go:123] Gathering logs for kube-scheduler [50ffad2915d6] ...
	I0729 11:00:21.951988    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50ffad2915d6"
	I0729 11:00:21.968457    8358 logs.go:123] Gathering logs for kube-proxy [bbb2c4abdab6] ...
	I0729 11:00:21.968470    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbb2c4abdab6"
	I0729 11:00:21.980208    8358 logs.go:123] Gathering logs for kube-controller-manager [bcc51a2b7568] ...
	I0729 11:00:21.980221    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc51a2b7568"
	I0729 11:00:21.997776    8358 logs.go:123] Gathering logs for dmesg ...
	I0729 11:00:21.997786    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:00:22.001789    8358 logs.go:123] Gathering logs for coredns [d576d5e5186f] ...
	I0729 11:00:22.001798    8358 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d576d5e5186f"
	I0729 11:00:24.516707    8358 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 11:00:29.519428    8358 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 11:00:29.527575    8358 out.go:177] 
	W0729 11:00:29.532613    8358 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 11:00:29.532642    8358 out.go:239] * 
	W0729 11:00:29.535041    8358 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:00:29.544574    8358 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-294000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.95s)
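Note on the failure mode: the trace above shows minikube polling the guest apiserver's health endpoint (api_server.go:253, https://10.0.2.15:8443/healthz) until the 6m0s node wait expires, so GUEST_START here means the VM booted but the apiserver never reported healthy. A minimal sketch of the same probe, assuming the guest IP and port from this run (both vary per run) and curl on the host:

	# Probe the health endpoint minikube polls; -k because the apiserver serves minikube's self-signed cert.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# A healthy apiserver answers "ok"; in this run the request deadline is exceeded instead.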

TestPause/serial/Start (9.91s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-278000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-278000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.844964583s)

-- stdout --
	* [pause-278000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-278000" primary control-plane node in "pause-278000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-278000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-278000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-278000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-278000 -n pause-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-278000 -n pause-278000: exit status 7 (65.458875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-278000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.91s)
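Note on the recurring error: every qemu2 start in this and the following sections fails at the same step, socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the VM never gets its network connection. A quick host-side check, assuming the Homebrew-managed socket_vmnet service described in the minikube qemu2 driver docs (the service name and socket path here are this CI host's configuration, not universal):

	# Is anything serving the socket the qemu2 driver is configured to use?
	ls -l /var/run/socket_vmnet
	# Restart the daemon; socket_vmnet must run as root to use the macOS vmnet framework.
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet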

TestNoKubernetes/serial/StartWithK8s (10.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-558000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-558000 --driver=qemu2 : exit status 80 (9.97170425s)

-- stdout --
	* [NoKubernetes-558000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-558000" primary control-plane node in "NoKubernetes-558000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-558000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-558000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-558000 -n NoKubernetes-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-558000 -n NoKubernetes-558000: exit status 7 (50.142916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.02s)

TestNoKubernetes/serial/StartWithStopK8s (5.35s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-558000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-558000 --no-kubernetes --driver=qemu2 : exit status 80 (5.290599s)

-- stdout --
	* [NoKubernetes-558000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-558000
	* Restarting existing qemu2 VM for "NoKubernetes-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-558000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-558000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-558000 -n NoKubernetes-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-558000 -n NoKubernetes-558000: exit status 7 (56.300333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.35s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-558000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-558000 --no-kubernetes --driver=qemu2 : exit status 80 (5.252827s)

-- stdout --
	* [NoKubernetes-558000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-558000
	* Restarting existing qemu2 VM for "NoKubernetes-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-558000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-558000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-558000 -n NoKubernetes-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-558000 -n NoKubernetes-558000: exit status 7 (59.956125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-558000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-558000 --driver=qemu2 : exit status 80 (5.278513375s)

-- stdout --
	* [NoKubernetes-558000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-558000
	* Restarting existing qemu2 VM for "NoKubernetes-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-558000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-558000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-558000 -n NoKubernetes-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-558000 -n NoKubernetes-558000: exit status 7 (36.87425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)

TestNetworkPlugins/group/auto/Start (9.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.831448292s)

-- stdout --
	* [auto-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-281000" primary control-plane node in "auto-281000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:58:32.038378    8555 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:58:32.038508    8555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:58:32.038511    8555 out.go:304] Setting ErrFile to fd 2...
	I0729 10:58:32.038514    8555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:58:32.038647    8555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:58:32.039691    8555 out.go:298] Setting JSON to false
	I0729 10:58:32.055933    8555 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5281,"bootTime":1722270631,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:58:32.056005    8555 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:58:32.061432    8555 out.go:177] * [auto-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:58:32.069273    8555 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:58:32.069303    8555 notify.go:220] Checking for updates...
	I0729 10:58:32.076223    8555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:58:32.079287    8555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:58:32.083193    8555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:58:32.086223    8555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:58:32.089202    8555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:58:32.092494    8555 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:58:32.092557    8555 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:58:32.092605    8555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:58:32.097208    8555 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:58:32.104194    8555 start.go:297] selected driver: qemu2
	I0729 10:58:32.104201    8555 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:58:32.104207    8555 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:58:32.106407    8555 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:58:32.110230    8555 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:58:32.113273    8555 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:58:32.113286    8555 cni.go:84] Creating CNI manager for ""
	I0729 10:58:32.113293    8555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:58:32.113296    8555 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:58:32.113317    8555 start.go:340] cluster config:
	{Name:auto-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:58:32.116843    8555 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:58:32.125272    8555 out.go:177] * Starting "auto-281000" primary control-plane node in "auto-281000" cluster
	I0729 10:58:32.129060    8555 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:58:32.129074    8555 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:58:32.129084    8555 cache.go:56] Caching tarball of preloaded images
	I0729 10:58:32.129173    8555 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:58:32.129190    8555 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:58:32.129241    8555 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/auto-281000/config.json ...
	I0729 10:58:32.129257    8555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/auto-281000/config.json: {Name:mka5a47604cde341f823b999b3720a02aafe78db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:58:32.130000    8555 start.go:360] acquireMachinesLock for auto-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:58:32.130033    8555 start.go:364] duration metric: took 27.042µs to acquireMachinesLock for "auto-281000"
	I0729 10:58:32.130043    8555 start.go:93] Provisioning new machine with config: &{Name:auto-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:58:32.130083    8555 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:58:32.135272    8555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:58:32.152048    8555 start.go:159] libmachine.API.Create for "auto-281000" (driver="qemu2")
	I0729 10:58:32.152080    8555 client.go:168] LocalClient.Create starting
	I0729 10:58:32.152148    8555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:58:32.152183    8555 main.go:141] libmachine: Decoding PEM data...
	I0729 10:58:32.152192    8555 main.go:141] libmachine: Parsing certificate...
	I0729 10:58:32.152236    8555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:58:32.152259    8555 main.go:141] libmachine: Decoding PEM data...
	I0729 10:58:32.152270    8555 main.go:141] libmachine: Parsing certificate...
	I0729 10:58:32.152679    8555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:58:32.302530    8555 main.go:141] libmachine: Creating SSH key...
	I0729 10:58:32.399158    8555 main.go:141] libmachine: Creating Disk image...
	I0729 10:58:32.399165    8555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:58:32.399379    8555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2
	I0729 10:58:32.408518    8555 main.go:141] libmachine: STDOUT: 
	I0729 10:58:32.408534    8555 main.go:141] libmachine: STDERR: 
	I0729 10:58:32.408575    8555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2 +20000M
	I0729 10:58:32.416324    8555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:58:32.416341    8555 main.go:141] libmachine: STDERR: 
	I0729 10:58:32.416352    8555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2
	I0729 10:58:32.416357    8555 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:58:32.416371    8555 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:58:32.416407    8555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:64:4d:78:9e:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2
	I0729 10:58:32.418117    8555 main.go:141] libmachine: STDOUT: 
	I0729 10:58:32.418132    8555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:58:32.418152    8555 client.go:171] duration metric: took 266.071708ms to LocalClient.Create
	I0729 10:58:34.420382    8555 start.go:128] duration metric: took 2.290308542s to createHost
	I0729 10:58:34.420467    8555 start.go:83] releasing machines lock for "auto-281000", held for 2.290464334s
	W0729 10:58:34.420524    8555 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:58:34.434480    8555 out.go:177] * Deleting "auto-281000" in qemu2 ...
	W0729 10:58:34.459966    8555 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:58:34.459999    8555 start.go:729] Will try again in 5 seconds ...
	I0729 10:58:39.461974    8555 start.go:360] acquireMachinesLock for auto-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:58:39.462091    8555 start.go:364] duration metric: took 98µs to acquireMachinesLock for "auto-281000"
	I0729 10:58:39.462109    8555 start.go:93] Provisioning new machine with config: &{Name:auto-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:58:39.462165    8555 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:58:39.469347    8555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:58:39.484453    8555 start.go:159] libmachine.API.Create for "auto-281000" (driver="qemu2")
	I0729 10:58:39.484477    8555 client.go:168] LocalClient.Create starting
	I0729 10:58:39.484544    8555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:58:39.484581    8555 main.go:141] libmachine: Decoding PEM data...
	I0729 10:58:39.484590    8555 main.go:141] libmachine: Parsing certificate...
	I0729 10:58:39.484624    8555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:58:39.484647    8555 main.go:141] libmachine: Decoding PEM data...
	I0729 10:58:39.484662    8555 main.go:141] libmachine: Parsing certificate...
	I0729 10:58:39.484927    8555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:58:39.632801    8555 main.go:141] libmachine: Creating SSH key...
	I0729 10:58:39.776291    8555 main.go:141] libmachine: Creating Disk image...
	I0729 10:58:39.776303    8555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:58:39.776514    8555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2
	I0729 10:58:39.786444    8555 main.go:141] libmachine: STDOUT: 
	I0729 10:58:39.786463    8555 main.go:141] libmachine: STDERR: 
	I0729 10:58:39.786514    8555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2 +20000M
	I0729 10:58:39.794379    8555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:58:39.794393    8555 main.go:141] libmachine: STDERR: 
	I0729 10:58:39.794404    8555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2
	I0729 10:58:39.794409    8555 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:58:39.794421    8555 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:58:39.794444    8555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:c4:22:58:6f:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/auto-281000/disk.qcow2
	I0729 10:58:39.796112    8555 main.go:141] libmachine: STDOUT: 
	I0729 10:58:39.796127    8555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:58:39.796139    8555 client.go:171] duration metric: took 311.663875ms to LocalClient.Create
	I0729 10:58:41.798306    8555 start.go:128] duration metric: took 2.336137042s to createHost
	I0729 10:58:41.798373    8555 start.go:83] releasing machines lock for "auto-281000", held for 2.336309834s
	W0729 10:58:41.798673    8555 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:58:41.810623    8555 out.go:177] 
	W0729 10:58:41.813721    8555 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:58:41.813738    8555 out.go:239] * 
	W0729 10:58:41.815696    8555 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:58:41.827599    8555 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.83s)
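Note on the launch path: the trace above shows how the qemu2 driver wires networking: qemu-system-aarch64 is started through /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet, which connects to the daemon's socket and hands the connection to the child process as fd 3 (hence -netdev socket,id=net0,fd=3 in the command line). The connection step can be reproduced without booting a VM; a sketch assuming the client binary path from this trace:

	# socket_vmnet_client connects to the socket, then execs the given command with the
	# connection inherited as fd 3; substituting a no-op for qemu isolates the failure.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# With no daemon listening, this prints the same 'Failed to connect to
	# "/var/run/socket_vmnet": Connection refused' seen in every failing start above.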

TestNetworkPlugins/group/kindnet/Start (9.94s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.938676s)

-- stdout --
	* [kindnet-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-281000" primary control-plane node in "kindnet-281000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:58:44.059922    8664 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:58:44.060032    8664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:58:44.060037    8664 out.go:304] Setting ErrFile to fd 2...
	I0729 10:58:44.060039    8664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:58:44.060176    8664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:58:44.061264    8664 out.go:298] Setting JSON to false
	I0729 10:58:44.077620    8664 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5293,"bootTime":1722270631,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:58:44.077691    8664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:58:44.084200    8664 out.go:177] * [kindnet-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:58:44.092401    8664 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:58:44.092448    8664 notify.go:220] Checking for updates...
	I0729 10:58:44.100356    8664 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:58:44.103371    8664 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:58:44.107311    8664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:58:44.110321    8664 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:58:44.113383    8664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:58:44.116626    8664 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:58:44.116698    8664 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:58:44.116754    8664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:58:44.121380    8664 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:58:44.128332    8664 start.go:297] selected driver: qemu2
	I0729 10:58:44.128340    8664 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:58:44.128345    8664 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:58:44.130601    8664 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:58:44.134327    8664 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:58:44.137505    8664 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:58:44.137531    8664 cni.go:84] Creating CNI manager for "kindnet"
	I0729 10:58:44.137540    8664 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 10:58:44.137568    8664 start.go:340] cluster config:
	{Name:kindnet-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:58:44.141104    8664 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:58:44.149356    8664 out.go:177] * Starting "kindnet-281000" primary control-plane node in "kindnet-281000" cluster
	I0729 10:58:44.152290    8664 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:58:44.152303    8664 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:58:44.152313    8664 cache.go:56] Caching tarball of preloaded images
	I0729 10:58:44.152361    8664 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:58:44.152365    8664 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:58:44.152427    8664 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/kindnet-281000/config.json ...
	I0729 10:58:44.152437    8664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/kindnet-281000/config.json: {Name:mk77d7d7a85ef5d80014e6b0374075ba5c750563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:58:44.152841    8664 start.go:360] acquireMachinesLock for kindnet-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:58:44.152877    8664 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "kindnet-281000"
	I0729 10:58:44.152888    8664 start.go:93] Provisioning new machine with config: &{Name:kindnet-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:58:44.152919    8664 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:58:44.160209    8664 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:58:44.175430    8664 start.go:159] libmachine.API.Create for "kindnet-281000" (driver="qemu2")
	I0729 10:58:44.175448    8664 client.go:168] LocalClient.Create starting
	I0729 10:58:44.175510    8664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:58:44.175543    8664 main.go:141] libmachine: Decoding PEM data...
	I0729 10:58:44.175551    8664 main.go:141] libmachine: Parsing certificate...
	I0729 10:58:44.175586    8664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:58:44.175608    8664 main.go:141] libmachine: Decoding PEM data...
	I0729 10:58:44.175616    8664 main.go:141] libmachine: Parsing certificate...
	I0729 10:58:44.175994    8664 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:58:44.325956    8664 main.go:141] libmachine: Creating SSH key...
	I0729 10:58:44.465716    8664 main.go:141] libmachine: Creating Disk image...
	I0729 10:58:44.465727    8664 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:58:44.465930    8664 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2
	I0729 10:58:44.475360    8664 main.go:141] libmachine: STDOUT: 
	I0729 10:58:44.475377    8664 main.go:141] libmachine: STDERR: 
	I0729 10:58:44.475420    8664 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2 +20000M
	I0729 10:58:44.483397    8664 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:58:44.483415    8664 main.go:141] libmachine: STDERR: 
	I0729 10:58:44.483429    8664 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2
	I0729 10:58:44.483435    8664 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:58:44.483447    8664 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:58:44.483477    8664 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:b8:c6:e0:ed:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2
	I0729 10:58:44.485154    8664 main.go:141] libmachine: STDOUT: 
	I0729 10:58:44.485170    8664 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:58:44.485187    8664 client.go:171] duration metric: took 309.741375ms to LocalClient.Create
	I0729 10:58:46.487319    8664 start.go:128] duration metric: took 2.3344235s to createHost
	I0729 10:58:46.487342    8664 start.go:83] releasing machines lock for "kindnet-281000", held for 2.334499584s
	W0729 10:58:46.487368    8664 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:58:46.501373    8664 out.go:177] * Deleting "kindnet-281000" in qemu2 ...
	W0729 10:58:46.524226    8664 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:58:46.524239    8664 start.go:729] Will try again in 5 seconds ...
	I0729 10:58:51.526473    8664 start.go:360] acquireMachinesLock for kindnet-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:58:51.526981    8664 start.go:364] duration metric: took 413.834µs to acquireMachinesLock for "kindnet-281000"
	I0729 10:58:51.527067    8664 start.go:93] Provisioning new machine with config: &{Name:kindnet-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:58:51.527322    8664 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:58:51.532058    8664 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:58:51.579893    8664 start.go:159] libmachine.API.Create for "kindnet-281000" (driver="qemu2")
	I0729 10:58:51.579947    8664 client.go:168] LocalClient.Create starting
	I0729 10:58:51.580079    8664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:58:51.580149    8664 main.go:141] libmachine: Decoding PEM data...
	I0729 10:58:51.580164    8664 main.go:141] libmachine: Parsing certificate...
	I0729 10:58:51.580252    8664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:58:51.580298    8664 main.go:141] libmachine: Decoding PEM data...
	I0729 10:58:51.580314    8664 main.go:141] libmachine: Parsing certificate...
	I0729 10:58:51.580956    8664 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:58:51.738176    8664 main.go:141] libmachine: Creating SSH key...
	I0729 10:58:51.907118    8664 main.go:141] libmachine: Creating Disk image...
	I0729 10:58:51.907125    8664 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:58:51.907350    8664 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2
	I0729 10:58:51.916862    8664 main.go:141] libmachine: STDOUT: 
	I0729 10:58:51.916879    8664 main.go:141] libmachine: STDERR: 
	I0729 10:58:51.916928    8664 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2 +20000M
	I0729 10:58:51.924921    8664 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:58:51.924935    8664 main.go:141] libmachine: STDERR: 
	I0729 10:58:51.924945    8664 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2
	I0729 10:58:51.924950    8664 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:58:51.924961    8664 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:58:51.925009    8664 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:9e:04:3c:de:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kindnet-281000/disk.qcow2
	I0729 10:58:51.926760    8664 main.go:141] libmachine: STDOUT: 
	I0729 10:58:51.926774    8664 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:58:51.926788    8664 client.go:171] duration metric: took 346.841ms to LocalClient.Create
	I0729 10:58:53.928972    8664 start.go:128] duration metric: took 2.401621958s to createHost
	I0729 10:58:53.929055    8664 start.go:83] releasing machines lock for "kindnet-281000", held for 2.402088625s
	W0729 10:58:53.929453    8664 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:58:53.941056    8664 out.go:177] 
	W0729 10:58:53.945201    8664 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:58:53.945227    8664 out.go:239] * 
	* 
	W0729 10:58:53.947685    8664 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:58:53.957007    8664 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.94s)
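[Annotation] Every failure in this group follows the same pattern: the disk image is created successfully, but launching the VM through /opt/socket_vmnet/bin/socket_vmnet_client fails with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning nothing is listening on the host's socket_vmnet socket. With the qemu2 driver and the socket_vmnet network, the daemon is typically installed via Homebrew and started as a root service (e.g. `sudo brew services start socket_vmnet`) before tests run. Below is a minimal Go sketch of the same reachability check, useful for confirming the daemon state before rerunning the suite; it is a diagnostic aid, not part of the test harness, and the socket path is taken from the logs above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client uses; a
		// "connection refused" here reproduces the failure in the logs
		// above and means the socket_vmnet daemon is not listening.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}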

TestNetworkPlugins/group/calico/Start (9.73s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.726174958s)

-- stdout --
	* [calico-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-281000" primary control-plane node in "calico-281000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:58:56.307853    8777 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:58:56.308002    8777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:58:56.308006    8777 out.go:304] Setting ErrFile to fd 2...
	I0729 10:58:56.308009    8777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:58:56.308124    8777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:58:56.309274    8777 out.go:298] Setting JSON to false
	I0729 10:58:56.325997    8777 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5305,"bootTime":1722270631,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:58:56.326131    8777 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:58:56.332155    8777 out.go:177] * [calico-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:58:56.340113    8777 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:58:56.340189    8777 notify.go:220] Checking for updates...
	I0729 10:58:56.348090    8777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:58:56.351128    8777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:58:56.354944    8777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:58:56.358083    8777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:58:56.361121    8777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:58:56.364487    8777 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:58:56.364556    8777 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:58:56.364601    8777 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:58:56.368087    8777 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:58:56.375087    8777 start.go:297] selected driver: qemu2
	I0729 10:58:56.375095    8777 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:58:56.375102    8777 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:58:56.377363    8777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:58:56.381108    8777 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:58:56.384135    8777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:58:56.384150    8777 cni.go:84] Creating CNI manager for "calico"
	I0729 10:58:56.384156    8777 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 10:58:56.384186    8777 start.go:340] cluster config:
	{Name:calico-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:58:56.387385    8777 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:58:56.395082    8777 out.go:177] * Starting "calico-281000" primary control-plane node in "calico-281000" cluster
	I0729 10:58:56.399141    8777 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:58:56.399155    8777 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:58:56.399165    8777 cache.go:56] Caching tarball of preloaded images
	I0729 10:58:56.399221    8777 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:58:56.399226    8777 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:58:56.399278    8777 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/calico-281000/config.json ...
	I0729 10:58:56.399288    8777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/calico-281000/config.json: {Name:mk3e58a84a2ef8fe5cee0d0aa933333e73b64861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:58:56.399682    8777 start.go:360] acquireMachinesLock for calico-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:58:56.399712    8777 start.go:364] duration metric: took 24.709µs to acquireMachinesLock for "calico-281000"
	I0729 10:58:56.399723    8777 start.go:93] Provisioning new machine with config: &{Name:calico-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:58:56.399760    8777 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:58:56.407107    8777 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:58:56.422249    8777 start.go:159] libmachine.API.Create for "calico-281000" (driver="qemu2")
	I0729 10:58:56.422279    8777 client.go:168] LocalClient.Create starting
	I0729 10:58:56.422346    8777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:58:56.422379    8777 main.go:141] libmachine: Decoding PEM data...
	I0729 10:58:56.422388    8777 main.go:141] libmachine: Parsing certificate...
	I0729 10:58:56.422432    8777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:58:56.422454    8777 main.go:141] libmachine: Decoding PEM data...
	I0729 10:58:56.422461    8777 main.go:141] libmachine: Parsing certificate...
	I0729 10:58:56.422796    8777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:58:56.575988    8777 main.go:141] libmachine: Creating SSH key...
	I0729 10:58:56.632649    8777 main.go:141] libmachine: Creating Disk image...
	I0729 10:58:56.632659    8777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:58:56.632876    8777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2
	I0729 10:58:56.642437    8777 main.go:141] libmachine: STDOUT: 
	I0729 10:58:56.642457    8777 main.go:141] libmachine: STDERR: 
	I0729 10:58:56.642511    8777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2 +20000M
	I0729 10:58:56.650698    8777 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:58:56.650714    8777 main.go:141] libmachine: STDERR: 
	I0729 10:58:56.650729    8777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2
	I0729 10:58:56.650733    8777 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:58:56.650747    8777 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:58:56.650774    8777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:d4:79:12:9b:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2
	I0729 10:58:56.652426    8777 main.go:141] libmachine: STDOUT: 
	I0729 10:58:56.652441    8777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:58:56.652461    8777 client.go:171] duration metric: took 230.180917ms to LocalClient.Create
	I0729 10:58:58.654549    8777 start.go:128] duration metric: took 2.254806459s to createHost
	I0729 10:58:58.654586    8777 start.go:83] releasing machines lock for "calico-281000", held for 2.254907208s
	W0729 10:58:58.654607    8777 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:58:58.663282    8777 out.go:177] * Deleting "calico-281000" in qemu2 ...
	W0729 10:58:58.676025    8777 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:58:58.676041    8777 start.go:729] Will try again in 5 seconds ...
	I0729 10:59:03.678172    8777 start.go:360] acquireMachinesLock for calico-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:59:03.678637    8777 start.go:364] duration metric: took 391.625µs to acquireMachinesLock for "calico-281000"
	I0729 10:59:03.678737    8777 start.go:93] Provisioning new machine with config: &{Name:calico-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:59:03.678969    8777 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:59:03.683357    8777 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:59:03.725017    8777 start.go:159] libmachine.API.Create for "calico-281000" (driver="qemu2")
	I0729 10:59:03.725075    8777 client.go:168] LocalClient.Create starting
	I0729 10:59:03.725197    8777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:59:03.725266    8777 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:03.725288    8777 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:03.725343    8777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:59:03.725399    8777 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:03.725413    8777 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:03.726159    8777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:59:03.882918    8777 main.go:141] libmachine: Creating SSH key...
	I0729 10:59:03.947039    8777 main.go:141] libmachine: Creating Disk image...
	I0729 10:59:03.947049    8777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:59:03.947252    8777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2
	I0729 10:59:03.956461    8777 main.go:141] libmachine: STDOUT: 
	I0729 10:59:03.956492    8777 main.go:141] libmachine: STDERR: 
	I0729 10:59:03.956546    8777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2 +20000M
	I0729 10:59:03.964878    8777 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:59:03.964897    8777 main.go:141] libmachine: STDERR: 
	I0729 10:59:03.964906    8777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2
	I0729 10:59:03.964913    8777 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:59:03.964935    8777 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:59:03.964965    8777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:77:e1:14:b2:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/calico-281000/disk.qcow2
	I0729 10:59:03.966683    8777 main.go:141] libmachine: STDOUT: 
	I0729 10:59:03.966700    8777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:59:03.966714    8777 client.go:171] duration metric: took 241.638083ms to LocalClient.Create
	I0729 10:59:05.968859    8777 start.go:128] duration metric: took 2.2898885s to createHost
	I0729 10:59:05.968922    8777 start.go:83] releasing machines lock for "calico-281000", held for 2.290306667s
	W0729 10:59:05.969398    8777 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:05.978391    8777 out.go:177] 
	W0729 10:59:05.984422    8777 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:59:05.984438    8777 out.go:239] * 
	* 
	W0729 10:59:05.986017    8777 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:59:05.994254    8777 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.73s)
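[Annotation] Note that in each attempt the qemu-img steps complete cleanly (empty STDERR, "Image resized.") before the launch fails, so the disk-image pipeline itself is healthy and the failure is isolated to the socket_vmnet connection. For reference, a sketch of the same two-step image preparation the logs show libmachine performing, using hypothetical local paths (the real images live under .minikube/machines/<profile>/) and assuming qemu-img is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runQemuImg mirrors the two disk steps the log shows succeeding:
	// convert the raw scaffold to qcow2, then grow it by 20000M.
	func runQemuImg(raw, qcow string) error {
		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow},
			{"qemu-img", "resize", qcow, "+20000M"},
		}
		for _, args := range steps {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v failed: %w (output: %s)", args, err, out)
			}
		}
		return nil
	}

	func main() {
		// Hypothetical paths, for illustration only.
		if err := runQemuImg("disk.qcow2.raw", "disk.qcow2"); err != nil {
			fmt.Println(err)
		}
	}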

TestNetworkPlugins/group/custom-flannel/Start (9.84s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.834180542s)

-- stdout --
	* [custom-flannel-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-281000" primary control-plane node in "custom-flannel-281000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:59:08.412552    8894 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:59:08.412679    8894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:59:08.412682    8894 out.go:304] Setting ErrFile to fd 2...
	I0729 10:59:08.412688    8894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:59:08.412817    8894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:59:08.413854    8894 out.go:298] Setting JSON to false
	I0729 10:59:08.430283    8894 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5317,"bootTime":1722270631,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:59:08.430352    8894 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:59:08.435548    8894 out.go:177] * [custom-flannel-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:59:08.443582    8894 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:59:08.443653    8894 notify.go:220] Checking for updates...
	I0729 10:59:08.451496    8894 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:59:08.455503    8894 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:59:08.459538    8894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:59:08.462555    8894 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:59:08.465507    8894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:59:08.468796    8894 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:59:08.468872    8894 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:59:08.468918    8894 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:59:08.472513    8894 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:59:08.479485    8894 start.go:297] selected driver: qemu2
	I0729 10:59:08.479493    8894 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:59:08.479500    8894 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:59:08.481811    8894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:59:08.485476    8894 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:59:08.488587    8894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:59:08.488603    8894 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 10:59:08.488609    8894 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0729 10:59:08.488638    8894 start.go:340] cluster config:
	{Name:custom-flannel-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:59:08.492295    8894 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:59:08.500501    8894 out.go:177] * Starting "custom-flannel-281000" primary control-plane node in "custom-flannel-281000" cluster
	I0729 10:59:08.504622    8894 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:59:08.504640    8894 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:59:08.504652    8894 cache.go:56] Caching tarball of preloaded images
	I0729 10:59:08.504716    8894 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:59:08.504722    8894 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:59:08.504791    8894 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/custom-flannel-281000/config.json ...
	I0729 10:59:08.504810    8894 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/custom-flannel-281000/config.json: {Name:mk757d212b583b1d0f0fca730f73ad480b9eeaa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:59:08.505187    8894 start.go:360] acquireMachinesLock for custom-flannel-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:59:08.505218    8894 start.go:364] duration metric: took 24.666µs to acquireMachinesLock for "custom-flannel-281000"
	I0729 10:59:08.505229    8894 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:59:08.505256    8894 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:59:08.514460    8894 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:59:08.529649    8894 start.go:159] libmachine.API.Create for "custom-flannel-281000" (driver="qemu2")
	I0729 10:59:08.529673    8894 client.go:168] LocalClient.Create starting
	I0729 10:59:08.529748    8894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:59:08.529779    8894 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:08.529789    8894 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:08.529829    8894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:59:08.529852    8894 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:08.529864    8894 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:08.530218    8894 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:59:08.692909    8894 main.go:141] libmachine: Creating SSH key...
	I0729 10:59:08.825287    8894 main.go:141] libmachine: Creating Disk image...
	I0729 10:59:08.825293    8894 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:59:08.825494    8894 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2
	I0729 10:59:08.834920    8894 main.go:141] libmachine: STDOUT: 
	I0729 10:59:08.834938    8894 main.go:141] libmachine: STDERR: 
	I0729 10:59:08.834988    8894 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2 +20000M
	I0729 10:59:08.842921    8894 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:59:08.842935    8894 main.go:141] libmachine: STDERR: 
	I0729 10:59:08.842960    8894 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2
	I0729 10:59:08.842964    8894 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:59:08.842977    8894 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:59:08.843001    8894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:1d:af:1e:a7:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2
	I0729 10:59:08.844699    8894 main.go:141] libmachine: STDOUT: 
	I0729 10:59:08.844717    8894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:59:08.844736    8894 client.go:171] duration metric: took 315.060334ms to LocalClient.Create
	I0729 10:59:10.846774    8894 start.go:128] duration metric: took 2.341549584s to createHost
	I0729 10:59:10.846798    8894 start.go:83] releasing machines lock for "custom-flannel-281000", held for 2.341616125s
	W0729 10:59:10.846813    8894 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:10.854403    8894 out.go:177] * Deleting "custom-flannel-281000" in qemu2 ...
	W0729 10:59:10.868856    8894 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:10.868865    8894 start.go:729] Will try again in 5 seconds ...
	I0729 10:59:15.871008    8894 start.go:360] acquireMachinesLock for custom-flannel-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:59:15.871533    8894 start.go:364] duration metric: took 438.625µs to acquireMachinesLock for "custom-flannel-281000"
	I0729 10:59:15.871676    8894 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:59:15.871952    8894 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:59:15.881565    8894 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:59:15.933542    8894 start.go:159] libmachine.API.Create for "custom-flannel-281000" (driver="qemu2")
	I0729 10:59:15.933601    8894 client.go:168] LocalClient.Create starting
	I0729 10:59:15.933713    8894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:59:15.933780    8894 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:15.933799    8894 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:15.933855    8894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:59:15.933900    8894 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:15.933918    8894 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:15.934439    8894 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:59:16.093031    8894 main.go:141] libmachine: Creating SSH key...
	I0729 10:59:16.154342    8894 main.go:141] libmachine: Creating Disk image...
	I0729 10:59:16.154349    8894 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:59:16.154569    8894 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2
	I0729 10:59:16.163769    8894 main.go:141] libmachine: STDOUT: 
	I0729 10:59:16.163787    8894 main.go:141] libmachine: STDERR: 
	I0729 10:59:16.163833    8894 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2 +20000M
	I0729 10:59:16.171864    8894 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:59:16.171885    8894 main.go:141] libmachine: STDERR: 
	I0729 10:59:16.171897    8894 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2
	I0729 10:59:16.171902    8894 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:59:16.171921    8894 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:59:16.171955    8894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:66:7d:4b:f2:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/custom-flannel-281000/disk.qcow2
	I0729 10:59:16.173618    8894 main.go:141] libmachine: STDOUT: 
	I0729 10:59:16.173633    8894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:59:16.173646    8894 client.go:171] duration metric: took 240.044167ms to LocalClient.Create
	I0729 10:59:18.175816    8894 start.go:128] duration metric: took 2.303865125s to createHost
	I0729 10:59:18.175911    8894 start.go:83] releasing machines lock for "custom-flannel-281000", held for 2.304391125s
	W0729 10:59:18.176521    8894 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:18.185097    8894 out.go:177] 
	W0729 10:59:18.192205    8894 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:59:18.192274    8894 out.go:239] * 
	* 
	W0729 10:59:18.195095    8894 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:59:18.204078    8894 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.84s)
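
The failure above, and every other failure in this group, has the same proximate cause: the qemu2 driver launches the VM through socket_vmnet_client, which is expected to hand an open vmnet connection to qemu-system-aarch64 as file descriptor 3 (-netdev socket,id=net0,fd=3), but the connection to the socket_vmnet daemon at /var/run/socket_vmnet is refused. A minimal diagnostic sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as the client path in the log suggests; the gateway address below is illustrative:

	# Is the unix socket present, and is a daemon holding it open?
	ls -l /var/run/socket_vmnet
	# If not, start the daemon on the build agent before re-running the suite
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet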

TestNetworkPlugins/group/false/Start (9.77s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.7676705s)

-- stdout --
	* [false-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-281000" primary control-plane node in "false-281000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:59:20.626777    9013 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:59:20.626908    9013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:59:20.626912    9013 out.go:304] Setting ErrFile to fd 2...
	I0729 10:59:20.626914    9013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:59:20.627030    9013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:59:20.628115    9013 out.go:298] Setting JSON to false
	I0729 10:59:20.644600    9013 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5329,"bootTime":1722270631,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:59:20.644666    9013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:59:20.650191    9013 out.go:177] * [false-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:59:20.658201    9013 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:59:20.658264    9013 notify.go:220] Checking for updates...
	I0729 10:59:20.665104    9013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:59:20.668150    9013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:59:20.671173    9013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:59:20.674182    9013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:59:20.677175    9013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:59:20.680464    9013 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:59:20.680532    9013 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:59:20.680581    9013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:59:20.684126    9013 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:59:20.691195    9013 start.go:297] selected driver: qemu2
	I0729 10:59:20.691202    9013 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:59:20.691207    9013 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:59:20.693483    9013 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:59:20.696165    9013 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:59:20.699229    9013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:59:20.699249    9013 cni.go:84] Creating CNI manager for "false"
	I0729 10:59:20.699275    9013 start.go:340] cluster config:
	{Name:false-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:59:20.702653    9013 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:59:20.710103    9013 out.go:177] * Starting "false-281000" primary control-plane node in "false-281000" cluster
	I0729 10:59:20.714189    9013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:59:20.714203    9013 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:59:20.714211    9013 cache.go:56] Caching tarball of preloaded images
	I0729 10:59:20.714268    9013 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:59:20.714274    9013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:59:20.714325    9013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/false-281000/config.json ...
	I0729 10:59:20.714339    9013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/false-281000/config.json: {Name:mk98ec0ea6ffab3022521a6ac680b22747108d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:59:20.714553    9013 start.go:360] acquireMachinesLock for false-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:59:20.714584    9013 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "false-281000"
	I0729 10:59:20.714595    9013 start.go:93] Provisioning new machine with config: &{Name:false-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:59:20.714625    9013 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:59:20.722144    9013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:59:20.737440    9013 start.go:159] libmachine.API.Create for "false-281000" (driver="qemu2")
	I0729 10:59:20.737475    9013 client.go:168] LocalClient.Create starting
	I0729 10:59:20.737541    9013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:59:20.737575    9013 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:20.737588    9013 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:20.737631    9013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:59:20.737653    9013 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:20.737663    9013 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:20.738070    9013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:59:20.888565    9013 main.go:141] libmachine: Creating SSH key...
	I0729 10:59:20.949365    9013 main.go:141] libmachine: Creating Disk image...
	I0729 10:59:20.949370    9013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:59:20.949569    9013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2
	I0729 10:59:20.958577    9013 main.go:141] libmachine: STDOUT: 
	I0729 10:59:20.958596    9013 main.go:141] libmachine: STDERR: 
	I0729 10:59:20.958640    9013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2 +20000M
	I0729 10:59:20.966800    9013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:59:20.966816    9013 main.go:141] libmachine: STDERR: 
	I0729 10:59:20.966833    9013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2
	I0729 10:59:20.966837    9013 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:59:20.966853    9013 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:59:20.966885    9013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:fb:72:9c:c5:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2
	I0729 10:59:20.968609    9013 main.go:141] libmachine: STDOUT: 
	I0729 10:59:20.968623    9013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:59:20.968642    9013 client.go:171] duration metric: took 231.165917ms to LocalClient.Create
	I0729 10:59:22.970796    9013 start.go:128] duration metric: took 2.256185s to createHost
	I0729 10:59:22.970904    9013 start.go:83] releasing machines lock for "false-281000", held for 2.256345708s
	W0729 10:59:22.970961    9013 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:22.981037    9013 out.go:177] * Deleting "false-281000" in qemu2 ...
	W0729 10:59:23.007142    9013 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:23.007176    9013 start.go:729] Will try again in 5 seconds ...
	I0729 10:59:28.009270    9013 start.go:360] acquireMachinesLock for false-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:59:28.009909    9013 start.go:364] duration metric: took 488.834µs to acquireMachinesLock for "false-281000"
	I0729 10:59:28.010041    9013 start.go:93] Provisioning new machine with config: &{Name:false-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:59:28.010355    9013 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:59:28.018792    9013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:59:28.068602    9013 start.go:159] libmachine.API.Create for "false-281000" (driver="qemu2")
	I0729 10:59:28.068656    9013 client.go:168] LocalClient.Create starting
	I0729 10:59:28.068763    9013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:59:28.068839    9013 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:28.068858    9013 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:28.068918    9013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:59:28.068962    9013 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:28.068984    9013 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:28.069511    9013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:59:28.228648    9013 main.go:141] libmachine: Creating SSH key...
	I0729 10:59:28.306725    9013 main.go:141] libmachine: Creating Disk image...
	I0729 10:59:28.306733    9013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:59:28.306954    9013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2
	I0729 10:59:28.316143    9013 main.go:141] libmachine: STDOUT: 
	I0729 10:59:28.316165    9013 main.go:141] libmachine: STDERR: 
	I0729 10:59:28.316223    9013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2 +20000M
	I0729 10:59:28.324097    9013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:59:28.324113    9013 main.go:141] libmachine: STDERR: 
	I0729 10:59:28.324124    9013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2
	I0729 10:59:28.324142    9013 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:59:28.324151    9013 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:59:28.324189    9013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:83:33:5d:4f:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/false-281000/disk.qcow2
	I0729 10:59:28.325843    9013 main.go:141] libmachine: STDOUT: 
	I0729 10:59:28.325859    9013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:59:28.325871    9013 client.go:171] duration metric: took 257.213834ms to LocalClient.Create
	I0729 10:59:30.328018    9013 start.go:128] duration metric: took 2.317667292s to createHost
	I0729 10:59:30.328118    9013 start.go:83] releasing machines lock for "false-281000", held for 2.318179s
	W0729 10:59:30.328461    9013 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:30.337000    9013 out.go:177] 
	W0729 10:59:30.343145    9013 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:59:30.343179    9013 out.go:239] * 
	* 
	W0729 10:59:30.344985    9013 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:59:30.353865    9013 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.77s)
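
The CNI setting is not a factor here (--cni=false merely skips deploying a CNI plugin, logged above as: Creating CNI manager for "false"); the start fails on the same refused socket_vmnet connection. That refusal can be reproduced outside the test harness. A sketch, assuming the install paths shown in the log: socket_vmnet_client takes the socket path followed by a command to wrap, and /usr/bin/true is a placeholder probe:

	# A healthy daemon execs the wrapped command with the vmnet connection on
	# fd 3; a broken one prints the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true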

TestNetworkPlugins/group/enable-default-cni/Start (9.7s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.693857042s)
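
The stderr below records minikube rewriting the deprecated --enable-default-cni flag to --cni=bridge (start_flags.go:464), so an equivalent invocation without the deprecated flag would be, as a sketch:

	out/minikube-darwin-arm64 start -p enable-default-cni-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2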

-- stdout --
	* [enable-default-cni-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-281000" primary control-plane node in "enable-default-cni-281000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:59:32.507525    9122 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:59:32.507679    9122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:59:32.507682    9122 out.go:304] Setting ErrFile to fd 2...
	I0729 10:59:32.507685    9122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:59:32.507821    9122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:59:32.508894    9122 out.go:298] Setting JSON to false
	I0729 10:59:32.525514    9122 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5341,"bootTime":1722270631,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:59:32.525589    9122 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:59:32.528959    9122 out.go:177] * [enable-default-cni-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:59:32.536619    9122 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:59:32.536606    9122 notify.go:220] Checking for updates...
	I0729 10:59:32.539499    9122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:59:32.542493    9122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:59:32.545492    9122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:59:32.548431    9122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:59:32.551493    9122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:59:32.554841    9122 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:59:32.554906    9122 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:59:32.554952    9122 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:59:32.558482    9122 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:59:32.565565    9122 start.go:297] selected driver: qemu2
	I0729 10:59:32.565576    9122 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:59:32.565591    9122 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:59:32.567843    9122 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:59:32.569137    9122 out.go:177] * Automatically selected the socket_vmnet network
	E0729 10:59:32.572524    9122 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0729 10:59:32.572536    9122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:59:32.572549    9122 cni.go:84] Creating CNI manager for "bridge"
	I0729 10:59:32.572552    9122 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:59:32.572576    9122 start.go:340] cluster config:
	{Name:enable-default-cni-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:59:32.575870    9122 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:59:32.584422    9122 out.go:177] * Starting "enable-default-cni-281000" primary control-plane node in "enable-default-cni-281000" cluster
	I0729 10:59:32.588531    9122 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:59:32.588548    9122 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:59:32.588556    9122 cache.go:56] Caching tarball of preloaded images
	I0729 10:59:32.588612    9122 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:59:32.588617    9122 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:59:32.588666    9122 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/enable-default-cni-281000/config.json ...
	I0729 10:59:32.588677    9122 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/enable-default-cni-281000/config.json: {Name:mkd2a4bab931fd12ccbb90d7d89ca02f32f41797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:59:32.588887    9122 start.go:360] acquireMachinesLock for enable-default-cni-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:59:32.588922    9122 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "enable-default-cni-281000"
	I0729 10:59:32.588935    9122 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:59:32.588959    9122 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:59:32.596539    9122 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:59:32.611436    9122 start.go:159] libmachine.API.Create for "enable-default-cni-281000" (driver="qemu2")
	I0729 10:59:32.611462    9122 client.go:168] LocalClient.Create starting
	I0729 10:59:32.611523    9122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:59:32.611555    9122 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:32.611562    9122 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:32.611599    9122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:59:32.611622    9122 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:32.611634    9122 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:32.612014    9122 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:59:32.761607    9122 main.go:141] libmachine: Creating SSH key...
	I0729 10:59:32.810053    9122 main.go:141] libmachine: Creating Disk image...
	I0729 10:59:32.810059    9122 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:59:32.810255    9122 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2
	I0729 10:59:32.819883    9122 main.go:141] libmachine: STDOUT: 
	I0729 10:59:32.819904    9122 main.go:141] libmachine: STDERR: 
	I0729 10:59:32.819966    9122 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2 +20000M
	I0729 10:59:32.828227    9122 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:59:32.828241    9122 main.go:141] libmachine: STDERR: 
	I0729 10:59:32.828267    9122 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2
	I0729 10:59:32.828272    9122 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:59:32.828286    9122 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:59:32.828312    9122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ea:cb:84:5c:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2
	I0729 10:59:32.830020    9122 main.go:141] libmachine: STDOUT: 
	I0729 10:59:32.830036    9122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:59:32.830053    9122 client.go:171] duration metric: took 218.590875ms to LocalClient.Create
	I0729 10:59:34.832114    9122 start.go:128] duration metric: took 2.24318175s to createHost
	I0729 10:59:34.832139    9122 start.go:83] releasing machines lock for "enable-default-cni-281000", held for 2.243249417s
	W0729 10:59:34.832191    9122 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:34.842087    9122 out.go:177] * Deleting "enable-default-cni-281000" in qemu2 ...
	W0729 10:59:34.860458    9122 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:34.860469    9122 start.go:729] Will try again in 5 seconds ...
	I0729 10:59:39.862713    9122 start.go:360] acquireMachinesLock for enable-default-cni-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:59:39.863362    9122 start.go:364] duration metric: took 525.708µs to acquireMachinesLock for "enable-default-cni-281000"
	I0729 10:59:39.863499    9122 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:59:39.863753    9122 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:59:39.873268    9122 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:59:39.922114    9122 start.go:159] libmachine.API.Create for "enable-default-cni-281000" (driver="qemu2")
	I0729 10:59:39.922169    9122 client.go:168] LocalClient.Create starting
	I0729 10:59:39.922283    9122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:59:39.922352    9122 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:39.922366    9122 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:39.922427    9122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:59:39.922471    9122 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:39.922481    9122 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:39.923238    9122 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:59:40.082846    9122 main.go:141] libmachine: Creating SSH key...
	I0729 10:59:40.112272    9122 main.go:141] libmachine: Creating Disk image...
	I0729 10:59:40.112280    9122 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:59:40.112480    9122 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2
	I0729 10:59:40.121862    9122 main.go:141] libmachine: STDOUT: 
	I0729 10:59:40.121881    9122 main.go:141] libmachine: STDERR: 
	I0729 10:59:40.121938    9122 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2 +20000M
	I0729 10:59:40.129987    9122 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:59:40.130002    9122 main.go:141] libmachine: STDERR: 
	I0729 10:59:40.130019    9122 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2
	I0729 10:59:40.130031    9122 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:59:40.130040    9122 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:59:40.130074    9122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:6e:46:79:ec:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/enable-default-cni-281000/disk.qcow2
	I0729 10:59:40.131789    9122 main.go:141] libmachine: STDOUT: 
	I0729 10:59:40.131803    9122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:59:40.131814    9122 client.go:171] duration metric: took 209.643ms to LocalClient.Create
	I0729 10:59:42.134026    9122 start.go:128] duration metric: took 2.270281333s to createHost
	I0729 10:59:42.134079    9122 start.go:83] releasing machines lock for "enable-default-cni-281000", held for 2.270734791s
	W0729 10:59:42.134329    9122 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:42.145674    9122 out.go:177] 
	W0729 10:59:42.148892    9122 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:59:42.148909    9122 out.go:239] * 
	W0729 10:59:42.150272    9122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:59:42.161823    9122 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.70s)
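
Every failure in this group has the same proximate cause: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU ever boots. A quick host-side check, as a diagnostic sketch only; the daemon binary path and the --vmnet-gateway address below are assumptions based on a default socket_vmnet install under /opt/socket_vmnet, not values taken from this log:

	# Does the unix socket exist at the path the qemu2 driver uses?
	ls -l /var/run/socket_vmnet
	# Start the daemon by hand (vmnet requires root); the socket should appear:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	# Re-test with the same client binary the driver invokes:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true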

TestNetworkPlugins/group/flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.914084667s)

-- stdout --
	* [flannel-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-281000" primary control-plane node in "flannel-281000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:59:44.314791    9231 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:59:44.314937    9231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:59:44.314941    9231 out.go:304] Setting ErrFile to fd 2...
	I0729 10:59:44.314943    9231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:59:44.315072    9231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:59:44.316234    9231 out.go:298] Setting JSON to false
	I0729 10:59:44.332397    9231 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5353,"bootTime":1722270631,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:59:44.332462    9231 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:59:44.338371    9231 out.go:177] * [flannel-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:59:44.346287    9231 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:59:44.346320    9231 notify.go:220] Checking for updates...
	I0729 10:59:44.353345    9231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:59:44.356357    9231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:59:44.359308    9231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:59:44.362375    9231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:59:44.365277    9231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:59:44.368648    9231 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:59:44.368717    9231 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:59:44.368760    9231 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:59:44.373333    9231 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:59:44.380321    9231 start.go:297] selected driver: qemu2
	I0729 10:59:44.380327    9231 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:59:44.380333    9231 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:59:44.382607    9231 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:59:44.386321    9231 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:59:44.389375    9231 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:59:44.389397    9231 cni.go:84] Creating CNI manager for "flannel"
	I0729 10:59:44.389417    9231 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0729 10:59:44.389458    9231 start.go:340] cluster config:
	{Name:flannel-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:59:44.393224    9231 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:59:44.401372    9231 out.go:177] * Starting "flannel-281000" primary control-plane node in "flannel-281000" cluster
	I0729 10:59:44.405331    9231 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:59:44.405344    9231 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:59:44.405354    9231 cache.go:56] Caching tarball of preloaded images
	I0729 10:59:44.405412    9231 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:59:44.405417    9231 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:59:44.405477    9231 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/flannel-281000/config.json ...
	I0729 10:59:44.405488    9231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/flannel-281000/config.json: {Name:mk2314067fd75952f6a843ee6b2abf4a567748b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:59:44.405705    9231 start.go:360] acquireMachinesLock for flannel-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:59:44.405742    9231 start.go:364] duration metric: took 32.209µs to acquireMachinesLock for "flannel-281000"
	I0729 10:59:44.405756    9231 start.go:93] Provisioning new machine with config: &{Name:flannel-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:59:44.405787    9231 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:59:44.418302    9231 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:59:44.434697    9231 start.go:159] libmachine.API.Create for "flannel-281000" (driver="qemu2")
	I0729 10:59:44.434729    9231 client.go:168] LocalClient.Create starting
	I0729 10:59:44.434806    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:59:44.434835    9231 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:44.434845    9231 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:44.434882    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:59:44.434903    9231 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:44.434911    9231 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:44.435266    9231 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:59:44.583846    9231 main.go:141] libmachine: Creating SSH key...
	I0729 10:59:44.677872    9231 main.go:141] libmachine: Creating Disk image...
	I0729 10:59:44.677878    9231 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:59:44.678081    9231 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2
	I0729 10:59:44.687641    9231 main.go:141] libmachine: STDOUT: 
	I0729 10:59:44.687655    9231 main.go:141] libmachine: STDERR: 
	I0729 10:59:44.687715    9231 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2 +20000M
	I0729 10:59:44.695618    9231 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:59:44.695634    9231 main.go:141] libmachine: STDERR: 
	I0729 10:59:44.695651    9231 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2
	I0729 10:59:44.695657    9231 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:59:44.695670    9231 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:59:44.695706    9231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:34:80:e6:14:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2
	I0729 10:59:44.697390    9231 main.go:141] libmachine: STDOUT: 
	I0729 10:59:44.697404    9231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:59:44.697424    9231 client.go:171] duration metric: took 262.696209ms to LocalClient.Create
	I0729 10:59:46.699492    9231 start.go:128] duration metric: took 2.293733292s to createHost
	I0729 10:59:46.699516    9231 start.go:83] releasing machines lock for "flannel-281000", held for 2.293807584s
	W0729 10:59:46.699561    9231 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:46.711334    9231 out.go:177] * Deleting "flannel-281000" in qemu2 ...
	W0729 10:59:46.725607    9231 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:46.725618    9231 start.go:729] Will try again in 5 seconds ...
	I0729 10:59:51.727875    9231 start.go:360] acquireMachinesLock for flannel-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:59:51.728492    9231 start.go:364] duration metric: took 465.209µs to acquireMachinesLock for "flannel-281000"
	I0729 10:59:51.728566    9231 start.go:93] Provisioning new machine with config: &{Name:flannel-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:59:51.728880    9231 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:59:51.737502    9231 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:59:51.787534    9231 start.go:159] libmachine.API.Create for "flannel-281000" (driver="qemu2")
	I0729 10:59:51.787596    9231 client.go:168] LocalClient.Create starting
	I0729 10:59:51.787720    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:59:51.787787    9231 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:51.787811    9231 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:51.787877    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:59:51.787922    9231 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:51.787933    9231 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:51.788453    9231 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:59:51.941789    9231 main.go:141] libmachine: Creating SSH key...
	I0729 10:59:52.142064    9231 main.go:141] libmachine: Creating Disk image...
	I0729 10:59:52.142080    9231 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:59:52.142318    9231 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2
	I0729 10:59:52.152295    9231 main.go:141] libmachine: STDOUT: 
	I0729 10:59:52.152320    9231 main.go:141] libmachine: STDERR: 
	I0729 10:59:52.152397    9231 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2 +20000M
	I0729 10:59:52.160674    9231 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:59:52.160688    9231 main.go:141] libmachine: STDERR: 
	I0729 10:59:52.160708    9231 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2
	I0729 10:59:52.160716    9231 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:59:52.160729    9231 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:59:52.160767    9231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:77:f9:34:17:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/flannel-281000/disk.qcow2
	I0729 10:59:52.162409    9231 main.go:141] libmachine: STDOUT: 
	I0729 10:59:52.162422    9231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:59:52.162439    9231 client.go:171] duration metric: took 374.842792ms to LocalClient.Create
	I0729 10:59:54.164530    9231 start.go:128] duration metric: took 2.435661042s to createHost
	I0729 10:59:54.164575    9231 start.go:83] releasing machines lock for "flannel-281000", held for 2.436102875s
	W0729 10:59:54.164814    9231 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:54.173301    9231 out.go:177] 
	W0729 10:59:54.179308    9231 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 10:59:54.179322    9231 out.go:239] * 
	W0729 10:59:54.180671    9231 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:59:54.190315    9231 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.92s)
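
Note that the disk-image phase succeeds on every attempt; only the network attach fails. The two qemu-img steps libmachine logs above can be reproduced in isolation to rule out the QEMU install itself (a sketch; paths shortened here for illustration):

	# Convert the raw seed image to qcow2, then grow it by 20000 MB,
	# mirroring the "Creating Disk image..." phase in the log above.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M   # sparse resize; works even with networking down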

TestNetworkPlugins/group/bridge/Start (9.9s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.900917917s)

-- stdout --
	* [bridge-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-281000" primary control-plane node in "bridge-281000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 10:59:56.496115    9349 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:59:56.496271    9349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:59:56.496274    9349 out.go:304] Setting ErrFile to fd 2...
	I0729 10:59:56.496282    9349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:59:56.496418    9349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:59:56.497553    9349 out.go:298] Setting JSON to false
	I0729 10:59:56.513818    9349 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5365,"bootTime":1722270631,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:59:56.513899    9349 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:59:56.520335    9349 out.go:177] * [bridge-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:59:56.524325    9349 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:59:56.524361    9349 notify.go:220] Checking for updates...
	I0729 10:59:56.534273    9349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:59:56.538219    9349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:59:56.542080    9349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:59:56.545262    9349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:59:56.548267    9349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:59:56.551725    9349 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:59:56.551791    9349 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 10:59:56.551841    9349 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:59:56.555231    9349 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 10:59:56.562251    9349 start.go:297] selected driver: qemu2
	I0729 10:59:56.562258    9349 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:59:56.562264    9349 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:59:56.564467    9349 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:59:56.568272    9349 out.go:177] * Automatically selected the socket_vmnet network
	I0729 10:59:56.571377    9349 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:59:56.571392    9349 cni.go:84] Creating CNI manager for "bridge"
	I0729 10:59:56.571396    9349 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:59:56.571433    9349 start.go:340] cluster config:
	{Name:bridge-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:59:56.575113    9349 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:59:56.584520    9349 out.go:177] * Starting "bridge-281000" primary control-plane node in "bridge-281000" cluster
	I0729 10:59:56.588307    9349 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:59:56.588323    9349 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:59:56.588331    9349 cache.go:56] Caching tarball of preloaded images
	I0729 10:59:56.588396    9349 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:59:56.588402    9349 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:59:56.588472    9349 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/bridge-281000/config.json ...
	I0729 10:59:56.588483    9349 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/bridge-281000/config.json: {Name:mke52611f21f155f2ac0525ba5b107f9b530e3db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:59:56.588864    9349 start.go:360] acquireMachinesLock for bridge-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:59:56.588902    9349 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "bridge-281000"
	I0729 10:59:56.588915    9349 start.go:93] Provisioning new machine with config: &{Name:bridge-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 10:59:56.588958    9349 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 10:59:56.597246    9349 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 10:59:56.613951    9349 start.go:159] libmachine.API.Create for "bridge-281000" (driver="qemu2")
	I0729 10:59:56.613974    9349 client.go:168] LocalClient.Create starting
	I0729 10:59:56.614033    9349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 10:59:56.614065    9349 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:56.614075    9349 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:56.614114    9349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 10:59:56.614137    9349 main.go:141] libmachine: Decoding PEM data...
	I0729 10:59:56.614145    9349 main.go:141] libmachine: Parsing certificate...
	I0729 10:59:56.614490    9349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 10:59:56.762821    9349 main.go:141] libmachine: Creating SSH key...
	I0729 10:59:56.866879    9349 main.go:141] libmachine: Creating Disk image...
	I0729 10:59:56.866886    9349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 10:59:56.867100    9349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2
	I0729 10:59:56.876422    9349 main.go:141] libmachine: STDOUT: 
	I0729 10:59:56.876454    9349 main.go:141] libmachine: STDERR: 
	I0729 10:59:56.876520    9349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2 +20000M
	I0729 10:59:56.884571    9349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 10:59:56.884585    9349 main.go:141] libmachine: STDERR: 
	I0729 10:59:56.884608    9349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2
	I0729 10:59:56.884613    9349 main.go:141] libmachine: Starting QEMU VM...
	I0729 10:59:56.884629    9349 qemu.go:418] Using hvf for hardware acceleration
	I0729 10:59:56.884675    9349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:f2:c6:9b:c7:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2
	I0729 10:59:56.886327    9349 main.go:141] libmachine: STDOUT: 
	I0729 10:59:56.886343    9349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 10:59:56.886361    9349 client.go:171] duration metric: took 272.387042ms to LocalClient.Create
	I0729 10:59:58.888431    9349 start.go:128] duration metric: took 2.299497041s to createHost
	I0729 10:59:58.888455    9349 start.go:83] releasing machines lock for "bridge-281000", held for 2.299587s
	W0729 10:59:58.888484    9349 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:58.900295    9349 out.go:177] * Deleting "bridge-281000" in qemu2 ...
	W0729 10:59:58.914607    9349 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 10:59:58.914616    9349 start.go:729] Will try again in 5 seconds ...
	I0729 11:00:03.916835    9349 start.go:360] acquireMachinesLock for bridge-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:03.917244    9349 start.go:364] duration metric: took 320.375µs to acquireMachinesLock for "bridge-281000"
	I0729 11:00:03.917353    9349 start.go:93] Provisioning new machine with config: &{Name:bridge-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:00:03.917505    9349 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:00:03.925794    9349 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 11:00:03.966468    9349 start.go:159] libmachine.API.Create for "bridge-281000" (driver="qemu2")
	I0729 11:00:03.966510    9349 client.go:168] LocalClient.Create starting
	I0729 11:00:03.966625    9349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:00:03.966686    9349 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:03.966702    9349 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:03.966760    9349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:00:03.966807    9349 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:03.966818    9349 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:03.967303    9349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:00:04.123211    9349 main.go:141] libmachine: Creating SSH key...
	I0729 11:00:04.309749    9349 main.go:141] libmachine: Creating Disk image...
	I0729 11:00:04.309765    9349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:00:04.310004    9349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2
	I0729 11:00:04.319614    9349 main.go:141] libmachine: STDOUT: 
	I0729 11:00:04.319637    9349 main.go:141] libmachine: STDERR: 
	I0729 11:00:04.319682    9349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2 +20000M
	I0729 11:00:04.327579    9349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:00:04.327592    9349 main.go:141] libmachine: STDERR: 
	I0729 11:00:04.327606    9349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2
	I0729 11:00:04.327611    9349 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:00:04.327623    9349 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:04.327653    9349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d2:25:5c:c7:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/bridge-281000/disk.qcow2
	I0729 11:00:04.329268    9349 main.go:141] libmachine: STDOUT: 
	I0729 11:00:04.329283    9349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:04.329296    9349 client.go:171] duration metric: took 362.785125ms to LocalClient.Create
	I0729 11:00:06.331350    9349 start.go:128] duration metric: took 2.413864375s to createHost
	I0729 11:00:06.331372    9349 start.go:83] releasing machines lock for "bridge-281000", held for 2.414157959s
	W0729 11:00:06.331523    9349 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:06.340777    9349 out.go:177] 
	W0729 11:00:06.348801    9349 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:00:06.348816    9349 out.go:239] * 
	W0729 11:00:06.349391    9349 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:00:06.361780    9349 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.90s)
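
Driver validation reports qemu2 as {Installed:true Healthy:true Running:true} even while the socket is dead, so each test still pays for a full create/delete/retry cycle before exiting with status 80. A preflight probe of the socket fails fast instead; this is a sketch that assumes the BSD netcat shipped with macOS (nc -z for a zero-I/O probe, -U for unix sockets):

	# Only attempt the cluster start if the socket_vmnet daemon answers.
	if nc -zU /var/run/socket_vmnet 2>/dev/null; then
	    out/minikube-darwin-arm64 start -p bridge-281000 --driver=qemu2
	else
	    echo "socket_vmnet is not running; start it before minikube" >&2
	fi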

TestNetworkPlugins/group/kubenet/Start (9.78s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-281000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.775036334s)

-- stdout --
	* [kubenet-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-281000" primary control-plane node in "kubenet-281000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 11:00:08.530456    9715 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:00:08.530592    9715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:08.530595    9715 out.go:304] Setting ErrFile to fd 2...
	I0729 11:00:08.530598    9715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:08.530728    9715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:00:08.531804    9715 out.go:298] Setting JSON to false
	I0729 11:00:08.547816    9715 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5377,"bootTime":1722270631,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 11:00:08.547889    9715 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:00:08.553059    9715 out.go:177] * [kubenet-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 11:00:08.561065    9715 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 11:00:08.561126    9715 notify.go:220] Checking for updates...
	I0729 11:00:08.567998    9715 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 11:00:08.571080    9715 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 11:00:08.574038    9715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:00:08.577062    9715 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 11:00:08.580054    9715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:00:08.583368    9715 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:00:08.583440    9715 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 11:00:08.583487    9715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:00:08.587974    9715 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 11:00:08.594901    9715 start.go:297] selected driver: qemu2
	I0729 11:00:08.594906    9715 start.go:901] validating driver "qemu2" against <nil>
	I0729 11:00:08.594912    9715 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:00:08.597308    9715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:00:08.604957    9715 out.go:177] * Automatically selected the socket_vmnet network
	I0729 11:00:08.608083    9715 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:00:08.608118    9715 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0729 11:00:08.608150    9715 start.go:340] cluster config:
	{Name:kubenet-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:00:08.611853    9715 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:08.615855    9715 out.go:177] * Starting "kubenet-281000" primary control-plane node in "kubenet-281000" cluster
	I0729 11:00:08.624046    9715 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:00:08.624077    9715 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 11:00:08.624093    9715 cache.go:56] Caching tarball of preloaded images
	I0729 11:00:08.624158    9715 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 11:00:08.624164    9715 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 11:00:08.624233    9715 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/kubenet-281000/config.json ...
	I0729 11:00:08.624243    9715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/kubenet-281000/config.json: {Name:mkf5299b455c0d9b3d529767b77d51f4a138ce31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:00:08.624631    9715 start.go:360] acquireMachinesLock for kubenet-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:08.624662    9715 start.go:364] duration metric: took 25.666µs to acquireMachinesLock for "kubenet-281000"
	I0729 11:00:08.624673    9715 start.go:93] Provisioning new machine with config: &{Name:kubenet-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:00:08.624698    9715 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:00:08.629064    9715 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 11:00:08.645118    9715 start.go:159] libmachine.API.Create for "kubenet-281000" (driver="qemu2")
	I0729 11:00:08.645141    9715 client.go:168] LocalClient.Create starting
	I0729 11:00:08.645206    9715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:00:08.645236    9715 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:08.645248    9715 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:08.645284    9715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:00:08.645306    9715 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:08.645313    9715 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:08.645693    9715 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:00:08.794037    9715 main.go:141] libmachine: Creating SSH key...
	I0729 11:00:08.855429    9715 main.go:141] libmachine: Creating Disk image...
	I0729 11:00:08.855435    9715 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:00:08.855660    9715 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2
	I0729 11:00:08.864811    9715 main.go:141] libmachine: STDOUT: 
	I0729 11:00:08.864839    9715 main.go:141] libmachine: STDERR: 
	I0729 11:00:08.864895    9715 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2 +20000M
	I0729 11:00:08.872914    9715 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:00:08.872927    9715 main.go:141] libmachine: STDERR: 
	I0729 11:00:08.872948    9715 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2
	I0729 11:00:08.872953    9715 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:00:08.872966    9715 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:08.872994    9715 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:4d:e3:e8:57:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2
	I0729 11:00:08.874675    9715 main.go:141] libmachine: STDOUT: 
	I0729 11:00:08.874688    9715 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:08.874712    9715 client.go:171] duration metric: took 229.571667ms to LocalClient.Create
	I0729 11:00:10.876880    9715 start.go:128] duration metric: took 2.252186667s to createHost
	I0729 11:00:10.876980    9715 start.go:83] releasing machines lock for "kubenet-281000", held for 2.252345375s
	W0729 11:00:10.877038    9715 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:10.890291    9715 out.go:177] * Deleting "kubenet-281000" in qemu2 ...
	W0729 11:00:10.916292    9715 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:10.916326    9715 start.go:729] Will try again in 5 seconds ...
	I0729 11:00:15.918426    9715 start.go:360] acquireMachinesLock for kubenet-281000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:15.919006    9715 start.go:364] duration metric: took 486.209µs to acquireMachinesLock for "kubenet-281000"
	I0729 11:00:15.919078    9715 start.go:93] Provisioning new machine with config: &{Name:kubenet-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:00:15.919386    9715 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:00:15.925110    9715 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 11:00:15.975415    9715 start.go:159] libmachine.API.Create for "kubenet-281000" (driver="qemu2")
	I0729 11:00:15.975467    9715 client.go:168] LocalClient.Create starting
	I0729 11:00:15.975607    9715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:00:15.975672    9715 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:15.975688    9715 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:15.975761    9715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:00:15.975808    9715 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:15.975819    9715 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:15.976346    9715 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:00:16.132817    9715 main.go:141] libmachine: Creating SSH key...
	I0729 11:00:16.218471    9715 main.go:141] libmachine: Creating Disk image...
	I0729 11:00:16.218478    9715 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:00:16.218724    9715 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2
	I0729 11:00:16.228199    9715 main.go:141] libmachine: STDOUT: 
	I0729 11:00:16.228218    9715 main.go:141] libmachine: STDERR: 
	I0729 11:00:16.228267    9715 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2 +20000M
	I0729 11:00:16.236412    9715 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:00:16.236427    9715 main.go:141] libmachine: STDERR: 
	I0729 11:00:16.236449    9715 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2
	I0729 11:00:16.236453    9715 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:00:16.236467    9715 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:16.236497    9715 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:cd:48:17:7d:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/kubenet-281000/disk.qcow2
	I0729 11:00:16.238173    9715 main.go:141] libmachine: STDOUT: 
	I0729 11:00:16.238190    9715 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:16.238203    9715 client.go:171] duration metric: took 262.733875ms to LocalClient.Create
	I0729 11:00:18.240270    9715 start.go:128] duration metric: took 2.320899708s to createHost
	I0729 11:00:18.240325    9715 start.go:83] releasing machines lock for "kubenet-281000", held for 2.321332625s
	W0729 11:00:18.240453    9715 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:18.250704    9715 out.go:177] 
	W0729 11:00:18.256750    9715 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:00:18.256758    9715 out.go:239] * 
	* 
	W0729 11:00:18.257384    9715 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:00:18.267695    9715 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.78s)
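
Note: the stderr above shows the qemu2 driver's standard two-attempt pattern: create the VM, hit the refused socket, delete the profile, retry once after 5 seconds, then exit with status 80, minikube's exit code for guest-layer errors such as the GUEST_PROVISION reason printed here. Distilled from the libmachine "executing:" lines (flags elided; the full command is in the log), the QEMU launch is wrapped by socket_vmnet_client, so the connection failure is fatal before qemu-system-aarch64 ever starts:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	  -device virtio-net-pci,netdev=net0,mac=... -netdev socket,id=net0,fd=3 ...

The -netdev socket,fd=3 argument expects socket_vmnet_client to hand QEMU a descriptor already connected to the daemon, which is why there is no fallback when the connect fails.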

TestStartStop/group/old-k8s-version/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.875772584s)

-- stdout --
	* [old-k8s-version-178000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-178000" primary control-plane node in "old-k8s-version-178000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-178000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 11:00:20.400477    9824 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:00:20.400608    9824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:20.400611    9824 out.go:304] Setting ErrFile to fd 2...
	I0729 11:00:20.400613    9824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:20.400739    9824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:00:20.401813    9824 out.go:298] Setting JSON to false
	I0729 11:00:20.417800    9824 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5389,"bootTime":1722270631,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 11:00:20.417883    9824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:00:20.423077    9824 out.go:177] * [old-k8s-version-178000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 11:00:20.430211    9824 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 11:00:20.430280    9824 notify.go:220] Checking for updates...
	I0729 11:00:20.437195    9824 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 11:00:20.440209    9824 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 11:00:20.444084    9824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:00:20.447161    9824 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 11:00:20.450144    9824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:00:20.453488    9824 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:00:20.453560    9824 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 11:00:20.453605    9824 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:00:20.457173    9824 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 11:00:20.464181    9824 start.go:297] selected driver: qemu2
	I0729 11:00:20.464190    9824 start.go:901] validating driver "qemu2" against <nil>
	I0729 11:00:20.464197    9824 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:00:20.466419    9824 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:00:20.469170    9824 out.go:177] * Automatically selected the socket_vmnet network
	I0729 11:00:20.472246    9824 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:00:20.472263    9824 cni.go:84] Creating CNI manager for ""
	I0729 11:00:20.472272    9824 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 11:00:20.472300    9824 start.go:340] cluster config:
	{Name:old-k8s-version-178000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:00:20.475662    9824 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:20.484184    9824 out.go:177] * Starting "old-k8s-version-178000" primary control-plane node in "old-k8s-version-178000" cluster
	I0729 11:00:20.487978    9824 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 11:00:20.487993    9824 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 11:00:20.488003    9824 cache.go:56] Caching tarball of preloaded images
	I0729 11:00:20.488073    9824 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 11:00:20.488078    9824 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 11:00:20.488135    9824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/old-k8s-version-178000/config.json ...
	I0729 11:00:20.488144    9824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/old-k8s-version-178000/config.json: {Name:mk2f90ca4e0b0c98b4f0d4effe03646c6a684042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:00:20.488363    9824 start.go:360] acquireMachinesLock for old-k8s-version-178000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:20.488395    9824 start.go:364] duration metric: took 23.708µs to acquireMachinesLock for "old-k8s-version-178000"
	I0729 11:00:20.488406    9824 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:00:20.488431    9824 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:00:20.496191    9824 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:00:20.511284    9824 start.go:159] libmachine.API.Create for "old-k8s-version-178000" (driver="qemu2")
	I0729 11:00:20.511313    9824 client.go:168] LocalClient.Create starting
	I0729 11:00:20.511372    9824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:00:20.511409    9824 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:20.511418    9824 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:20.511455    9824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:00:20.511477    9824 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:20.511488    9824 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:20.511829    9824 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:00:20.658401    9824 main.go:141] libmachine: Creating SSH key...
	I0729 11:00:20.826982    9824 main.go:141] libmachine: Creating Disk image...
	I0729 11:00:20.826991    9824 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:00:20.827228    9824 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0729 11:00:20.836964    9824 main.go:141] libmachine: STDOUT: 
	I0729 11:00:20.836995    9824 main.go:141] libmachine: STDERR: 
	I0729 11:00:20.837048    9824 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2 +20000M
	I0729 11:00:20.844994    9824 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:00:20.845011    9824 main.go:141] libmachine: STDERR: 
	I0729 11:00:20.845022    9824 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0729 11:00:20.845028    9824 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:00:20.845045    9824 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:20.845069    9824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:c0:72:ba:30:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0729 11:00:20.846753    9824 main.go:141] libmachine: STDOUT: 
	I0729 11:00:20.846771    9824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:20.846790    9824 client.go:171] duration metric: took 335.478458ms to LocalClient.Create
	I0729 11:00:22.848959    9824 start.go:128] duration metric: took 2.360537458s to createHost
	I0729 11:00:22.849041    9824 start.go:83] releasing machines lock for "old-k8s-version-178000", held for 2.36067525s
	W0729 11:00:22.849181    9824 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:22.864493    9824 out.go:177] * Deleting "old-k8s-version-178000" in qemu2 ...
	W0729 11:00:22.889769    9824 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:22.889802    9824 start.go:729] Will try again in 5 seconds ...
	I0729 11:00:27.891938    9824 start.go:360] acquireMachinesLock for old-k8s-version-178000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:27.892633    9824 start.go:364] duration metric: took 598.708µs to acquireMachinesLock for "old-k8s-version-178000"
	I0729 11:00:27.892766    9824 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:00:27.893041    9824 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:00:27.899884    9824 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:00:27.948177    9824 start.go:159] libmachine.API.Create for "old-k8s-version-178000" (driver="qemu2")
	I0729 11:00:27.948223    9824 client.go:168] LocalClient.Create starting
	I0729 11:00:27.948348    9824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:00:27.948427    9824 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:27.948444    9824 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:27.948518    9824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:00:27.948564    9824 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:27.948578    9824 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:27.949420    9824 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:00:28.107793    9824 main.go:141] libmachine: Creating SSH key...
	I0729 11:00:28.186933    9824 main.go:141] libmachine: Creating Disk image...
	I0729 11:00:28.186939    9824 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:00:28.187173    9824 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0729 11:00:28.196578    9824 main.go:141] libmachine: STDOUT: 
	I0729 11:00:28.196599    9824 main.go:141] libmachine: STDERR: 
	I0729 11:00:28.196658    9824 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2 +20000M
	I0729 11:00:28.204669    9824 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:00:28.204686    9824 main.go:141] libmachine: STDERR: 
	I0729 11:00:28.204703    9824 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0729 11:00:28.204707    9824 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:00:28.204719    9824 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:28.204745    9824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:a5:4c:4e:b1:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0729 11:00:28.206475    9824 main.go:141] libmachine: STDOUT: 
	I0729 11:00:28.206494    9824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:28.206507    9824 client.go:171] duration metric: took 258.28375ms to LocalClient.Create
	I0729 11:00:30.208587    9824 start.go:128] duration metric: took 2.315560459s to createHost
	I0729 11:00:30.208629    9824 start.go:83] releasing machines lock for "old-k8s-version-178000", held for 2.316014459s
	W0729 11:00:30.208767    9824 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-178000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-178000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:30.218286    9824 out.go:177] 
	W0729 11:00:30.225277    9824 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:00:30.225283    9824 out.go:239] * 
	* 
	W0729 11:00:30.225820    9824 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:00:30.240203    9824 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (30.497959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.91s)
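
Note: the remaining old-k8s-version subtests below (DeployApp, EnableAddonWhileActive, SecondStart) fail as a cascade of this FirstStart failure rather than as independent regressions: the cluster was never provisioned, so no "old-k8s-version-178000" kubeconfig context exists, which is exactly what kubectl reports next. Two quick checks for distinguishing a cascade from a fresh bug while triaging (both tools appear in the log in equivalent forms):

	# No context is written until a start has succeeded at least once.
	kubectl config get-contexts old-k8s-version-178000
	# Host-level view; "Stopped" with exit status 7 is what the
	# post-mortem helpers record for every subtest in this group.
	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000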

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-178000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-178000 create -f testdata/busybox.yaml: exit status 1 (29.138291ms)

** stderr ** 
	error: context "old-k8s-version-178000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-178000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (32.417292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (31.826583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-178000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-178000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-178000 describe deploy/metrics-server -n kube-system: exit status 1 (29.915167ms)

** stderr ** 
	error: context "old-k8s-version-178000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-178000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (29.977375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.173827208s)

-- stdout --
	* [old-k8s-version-178000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-178000" primary control-plane node in "old-k8s-version-178000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-178000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-178000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 11:00:33.764866    9880 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:00:33.764985    9880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:33.764988    9880 out.go:304] Setting ErrFile to fd 2...
	I0729 11:00:33.764991    9880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:33.765117    9880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:00:33.766184    9880 out.go:298] Setting JSON to false
	I0729 11:00:33.782370    9880 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5402,"bootTime":1722270631,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 11:00:33.782464    9880 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:00:33.786492    9880 out.go:177] * [old-k8s-version-178000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 11:00:33.792310    9880 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 11:00:33.792369    9880 notify.go:220] Checking for updates...
	I0729 11:00:33.800252    9880 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 11:00:33.803319    9880 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 11:00:33.806308    9880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:00:33.809280    9880 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 11:00:33.812293    9880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:00:33.815536    9880 config.go:182] Loaded profile config "old-k8s-version-178000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 11:00:33.816828    9880 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 11:00:33.819307    9880 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:00:33.823354    9880 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 11:00:33.830314    9880 start.go:297] selected driver: qemu2
	I0729 11:00:33.830324    9880 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:00:33.830428    9880 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:00:33.832724    9880 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:00:33.832768    9880 cni.go:84] Creating CNI manager for ""
	I0729 11:00:33.832774    9880 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 11:00:33.832796    9880 start.go:340] cluster config:
	{Name:old-k8s-version-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:00:33.836227    9880 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:33.844202    9880 out.go:177] * Starting "old-k8s-version-178000" primary control-plane node in "old-k8s-version-178000" cluster
	I0729 11:00:33.848359    9880 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 11:00:33.848374    9880 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 11:00:33.848386    9880 cache.go:56] Caching tarball of preloaded images
	I0729 11:00:33.848444    9880 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 11:00:33.848449    9880 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 11:00:33.848513    9880 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/old-k8s-version-178000/config.json ...
	I0729 11:00:33.849045    9880 start.go:360] acquireMachinesLock for old-k8s-version-178000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:33.849077    9880 start.go:364] duration metric: took 25.959µs to acquireMachinesLock for "old-k8s-version-178000"
	I0729 11:00:33.849087    9880 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:00:33.849092    9880 fix.go:54] fixHost starting: 
	I0729 11:00:33.849201    9880 fix.go:112] recreateIfNeeded on old-k8s-version-178000: state=Stopped err=<nil>
	W0729 11:00:33.849209    9880 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:00:33.852284    9880 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-178000" ...
	I0729 11:00:33.860126    9880 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:33.860175    9880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:a5:4c:4e:b1:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0729 11:00:33.862156    9880 main.go:141] libmachine: STDOUT: 
	I0729 11:00:33.862182    9880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:33.862209    9880 fix.go:56] duration metric: took 13.116792ms for fixHost
	I0729 11:00:33.862213    9880 start.go:83] releasing machines lock for "old-k8s-version-178000", held for 13.131833ms
	W0729 11:00:33.862220    9880 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:00:33.862245    9880 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:33.862249    9880 start.go:729] Will try again in 5 seconds ...
	I0729 11:00:38.864316    9880 start.go:360] acquireMachinesLock for old-k8s-version-178000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:38.864529    9880 start.go:364] duration metric: took 160.083µs to acquireMachinesLock for "old-k8s-version-178000"
	I0729 11:00:38.864590    9880 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:00:38.864596    9880 fix.go:54] fixHost starting: 
	I0729 11:00:38.864849    9880 fix.go:112] recreateIfNeeded on old-k8s-version-178000: state=Stopped err=<nil>
	W0729 11:00:38.864857    9880 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:00:38.874076    9880 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-178000" ...
	I0729 11:00:38.878050    9880 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:38.878116    9880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:a5:4c:4e:b1:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0729 11:00:38.880756    9880 main.go:141] libmachine: STDOUT: 
	I0729 11:00:38.880774    9880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:38.880796    9880 fix.go:56] duration metric: took 16.201167ms for fixHost
	I0729 11:00:38.880800    9880 start.go:83] releasing machines lock for "old-k8s-version-178000", held for 16.262708ms
	W0729 11:00:38.880869    9880 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-178000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-178000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:38.889077    9880 out.go:177] 
	W0729 11:00:38.893037    9880 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:00:38.893046    9880 out.go:239] * 
	* 
	W0729 11:00:38.893573    9880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:00:38.904158    9880 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (29.734708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.20s)
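
Root cause for this group: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and every attempt is refused on /var/run/socket_vmnet, which indicates the socket_vmnet daemon is not running on the CI host. A minimal standalone probe that reproduces the same error (paths copied from the log above; this is not minikube code):

    // probe_vmnet.go: dial the unix socket that socket_vmnet_client needs.
    // A "connection refused" from this dial reproduces the driver failure.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet reachable")
    }

If the dial is refused, restarting the socket_vmnet service on the host (for Homebrew installs, the minikube qemu2 driver documentation covers starting it with elevated privileges) should clear this entire class of failures.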

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-178000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (30.353584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-178000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.520792ms)

** stderr ** 
	error: context "old-k8s-version-178000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (30.182542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-178000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
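
The block above is a go-cmp style "(-want +got)" diff: every expected v1.20.0 image carries a leading "-" and nothing carries a "+", meaning image list returned an empty set, as expected when the VM never booted. A sketch of how such a diff is produced, assuming the harness uses github.com/google/go-cmp (values abbreviated for illustration):

    // image_diff.go: hedged reproduction of the (-want +got) output format.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{"k8s.gcr.io/pause:3.2", "k8s.gcr.io/etcd:3.4.13-0"}
        got := []string{} // empty: no running VM, so no images to list
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
        }
    }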
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (28.788042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-178000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-178000 --alsologtostderr -v=1: exit status 83 (42.727375ms)

-- stdout --
	* The control-plane node old-k8s-version-178000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-178000"

-- /stdout --
** stderr ** 
	I0729 11:00:39.127039    9899 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:00:39.128087    9899 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:39.128093    9899 out.go:304] Setting ErrFile to fd 2...
	I0729 11:00:39.128096    9899 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:39.128224    9899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:00:39.128441    9899 out.go:298] Setting JSON to false
	I0729 11:00:39.128450    9899 mustload.go:65] Loading cluster: old-k8s-version-178000
	I0729 11:00:39.128632    9899 config.go:182] Loaded profile config "old-k8s-version-178000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 11:00:39.132328    9899 out.go:177] * The control-plane node old-k8s-version-178000 host is not running: state=Stopped
	I0729 11:00:39.136289    9899 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-178000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-178000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (28.819916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (29.820208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-878000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-878000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.885845792s)

-- stdout --
	* [no-preload-878000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-878000" primary control-plane node in "no-preload-878000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-878000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 11:00:39.444549    9916 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:00:39.444670    9916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:39.444673    9916 out.go:304] Setting ErrFile to fd 2...
	I0729 11:00:39.444675    9916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:39.444831    9916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:00:39.445946    9916 out.go:298] Setting JSON to false
	I0729 11:00:39.462763    9916 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5408,"bootTime":1722270631,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 11:00:39.462852    9916 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:00:39.466081    9916 out.go:177] * [no-preload-878000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 11:00:39.473130    9916 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 11:00:39.473264    9916 notify.go:220] Checking for updates...
	I0729 11:00:39.478998    9916 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 11:00:39.482098    9916 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 11:00:39.484972    9916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:00:39.488061    9916 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 11:00:39.491038    9916 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:00:39.492760    9916 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:00:39.492823    9916 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 11:00:39.492873    9916 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:00:39.496015    9916 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 11:00:39.502924    9916 start.go:297] selected driver: qemu2
	I0729 11:00:39.502936    9916 start.go:901] validating driver "qemu2" against <nil>
	I0729 11:00:39.502944    9916 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:00:39.505272    9916 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:00:39.508025    9916 out.go:177] * Automatically selected the socket_vmnet network
	I0729 11:00:39.512176    9916 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:00:39.512216    9916 cni.go:84] Creating CNI manager for ""
	I0729 11:00:39.512226    9916 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 11:00:39.512232    9916 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 11:00:39.512269    9916 start.go:340] cluster config:
	{Name:no-preload-878000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:00:39.515890    9916 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:39.523993    9916 out.go:177] * Starting "no-preload-878000" primary control-plane node in "no-preload-878000" cluster
	I0729 11:00:39.527994    9916 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 11:00:39.528063    9916 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/no-preload-878000/config.json ...
	I0729 11:00:39.528080    9916 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/no-preload-878000/config.json: {Name:mkea26f6ced67917e63f90ec5aadcfab671ad8bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:00:39.528087    9916 cache.go:107] acquiring lock: {Name:mk999e4e69584c4a64cb49ec9e99877f268d7913 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:39.528098    9916 cache.go:107] acquiring lock: {Name:mk1e14473e57849268c3928f20bbe65ba5d64a74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:39.528122    9916 cache.go:107] acquiring lock: {Name:mk9a28a9832f7383c9c1f4e6800474fd99ce7216 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:39.528174    9916 cache.go:107] acquiring lock: {Name:mkbba392f86182b40496a89e01a5f59882aae220 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:39.528215    9916 cache.go:107] acquiring lock: {Name:mka8feaf2359bd31646395d57f0a4728788e453f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:39.528253    9916 cache.go:107] acquiring lock: {Name:mk35e17b78045ca4c4b30e88afc14065ebd61bfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:39.528323    9916 cache.go:107] acquiring lock: {Name:mk24e73381f57ed11af5214a654d57bc779b1a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:39.528333    9916 cache.go:107] acquiring lock: {Name:mk989ec7345c3af9dcce35ec8c16f46f993f77d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:39.528582    9916 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:00:39.528582    9916 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 11:00:39.528592    9916 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:00:39.528582    9916 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:00:39.528584    9916 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:00:39.528617    9916 cache.go:115] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 11:00:39.528726    9916 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 640.292µs
	I0729 11:00:39.528736    9916 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 11:00:39.528752    9916 start.go:360] acquireMachinesLock for no-preload-878000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:39.528756    9916 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:00:39.528784    9916 start.go:364] duration metric: took 25.292µs to acquireMachinesLock for "no-preload-878000"
	I0729 11:00:39.528795    9916 start.go:93] Provisioning new machine with config: &{Name:no-preload-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:00:39.528826    9916 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:00:39.528854    9916 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:00:39.534995    9916 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:00:39.538710    9916 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 11:00:39.538736    9916 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 11:00:39.538835    9916 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 11:00:39.539363    9916 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 11:00:39.541137    9916 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 11:00:39.541152    9916 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 11:00:39.541183    9916 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 11:00:39.551582    9916 start.go:159] libmachine.API.Create for "no-preload-878000" (driver="qemu2")
	I0729 11:00:39.551609    9916 client.go:168] LocalClient.Create starting
	I0729 11:00:39.551728    9916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:00:39.551758    9916 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:39.551779    9916 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:39.551818    9916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:00:39.551845    9916 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:39.551858    9916 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:39.552224    9916 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:00:39.721093    9916 main.go:141] libmachine: Creating SSH key...
	I0729 11:00:39.900384    9916 main.go:141] libmachine: Creating Disk image...
	I0729 11:00:39.900399    9916 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:00:39.900641    9916 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2
	I0729 11:00:39.910382    9916 main.go:141] libmachine: STDOUT: 
	I0729 11:00:39.910402    9916 main.go:141] libmachine: STDERR: 
	I0729 11:00:39.910456    9916 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2 +20000M
	I0729 11:00:39.918612    9916 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:00:39.918626    9916 main.go:141] libmachine: STDERR: 
	I0729 11:00:39.918637    9916 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2
	I0729 11:00:39.918641    9916 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:00:39.918657    9916 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:39.918680    9916 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:f6:69:76:f7:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2
	I0729 11:00:39.920391    9916 main.go:141] libmachine: STDOUT: 
	I0729 11:00:39.920404    9916 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:39.920420    9916 client.go:171] duration metric: took 368.813458ms to LocalClient.Create
	I0729 11:00:39.937219    9916 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0729 11:00:39.941080    9916 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 11:00:39.947324    9916 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 11:00:39.951973    9916 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0729 11:00:39.988966    9916 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 11:00:39.991675    9916 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 11:00:40.034643    9916 cache.go:162] opening:  /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 11:00:40.136588    9916 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 11:00:40.136612    9916 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 608.452042ms
	I0729 11:00:40.136625    9916 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 11:00:41.920550    9916 start.go:128] duration metric: took 2.391723041s to createHost
	I0729 11:00:41.920584    9916 start.go:83] releasing machines lock for "no-preload-878000", held for 2.391834542s
	W0729 11:00:41.920608    9916 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:41.926666    9916 out.go:177] * Deleting "no-preload-878000" in qemu2 ...
	W0729 11:00:41.945851    9916 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:41.945866    9916 start.go:729] Will try again in 5 seconds ...
	I0729 11:00:42.771928    9916 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 11:00:42.771981    9916 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.243912542s
	I0729 11:00:42.771998    9916 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 11:00:42.933225    9916 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 11:00:42.933252    9916 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.405060667s
	I0729 11:00:42.933295    9916 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 11:00:43.398559    9916 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 11:00:43.398606    9916 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 3.870450125s
	I0729 11:00:43.398620    9916 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 11:00:43.437313    9916 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 11:00:43.437327    9916 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 3.909152083s
	I0729 11:00:43.437337    9916 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 11:00:43.811844    9916 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 11:00:43.811882    9916 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.283829542s
	I0729 11:00:43.811898    9916 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 11:00:46.945915    9916 start.go:360] acquireMachinesLock for no-preload-878000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:46.946016    9916 start.go:364] duration metric: took 76.041µs to acquireMachinesLock for "no-preload-878000"
	I0729 11:00:46.946027    9916 start.go:93] Provisioning new machine with config: &{Name:no-preload-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:00:46.946068    9916 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:00:46.954189    9916 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:00:46.970132    9916 start.go:159] libmachine.API.Create for "no-preload-878000" (driver="qemu2")
	I0729 11:00:46.970160    9916 client.go:168] LocalClient.Create starting
	I0729 11:00:46.970226    9916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:00:46.970272    9916 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:46.970284    9916 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:46.970322    9916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:00:46.970345    9916 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:46.970353    9916 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:46.970648    9916 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:00:47.143656    9916 main.go:141] libmachine: Creating SSH key...
	I0729 11:00:47.236438    9916 main.go:141] libmachine: Creating Disk image...
	I0729 11:00:47.236444    9916 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:00:47.236670    9916 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2
	I0729 11:00:47.246516    9916 main.go:141] libmachine: STDOUT: 
	I0729 11:00:47.246534    9916 main.go:141] libmachine: STDERR: 
	I0729 11:00:47.246588    9916 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2 +20000M
	I0729 11:00:47.254865    9916 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:00:47.254882    9916 main.go:141] libmachine: STDERR: 
	I0729 11:00:47.254897    9916 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2
	I0729 11:00:47.254900    9916 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:00:47.254915    9916 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:47.254959    9916 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:a9:25:50:6e:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2
	I0729 11:00:47.256827    9916 main.go:141] libmachine: STDOUT: 
	I0729 11:00:47.256843    9916 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:47.256857    9916 client.go:171] duration metric: took 286.698667ms to LocalClient.Create
	I0729 11:00:47.490151    9916 cache.go:157] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 11:00:47.490182    9916 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 7.962109541s
	I0729 11:00:47.490199    9916 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 11:00:47.490260    9916 cache.go:87] Successfully saved all images to host disk.
	I0729 11:00:49.259068    9916 start.go:128] duration metric: took 2.312986375s to createHost
	I0729 11:00:49.259128    9916 start.go:83] releasing machines lock for "no-preload-878000", held for 2.313142666s
	W0729 11:00:49.259396    9916 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:49.268896    9916 out.go:177] 
	W0729 11:00:49.276105    9916 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:00:49.276137    9916 out.go:239] * 
	* 
	W0729 11:00:49.279021    9916 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:00:49.287810    9916 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-878000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000: exit status 7 (63.962ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.95s)
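
Every failed start in this group dies at the same step: the socket_vmnet client cannot reach the /var/run/socket_vmnet socket, so QEMU never receives its network file descriptor and host creation aborts. A quick sanity check from the build host, assuming a Homebrew-managed socket_vmnet install (the socket path is taken from the log above; the service name and management via brew are assumptions):

	# does the socket exist?
	ls -l /var/run/socket_vmnet
	# restart the daemon if it is down (Homebrew-managed install)
	sudo brew services restart socket_vmnet

Until that socket is reachable, every qemu2 start that selects the socket_vmnet network will fail with the same "Connection refused" shown above.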

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-878000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-878000 create -f testdata/busybox.yaml: exit status 1 (29.952333ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-878000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-878000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000: exit status 7 (28.476916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-878000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000: exit status 7 (29.508334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
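
The kubectl failure here is secondary: FirstStart never created the cluster, so the kubeconfig holds no "no-preload-878000" context for kubectl --context to resolve. Before reading anything else into a "context does not exist" error, it can help to list what the tests' kubeconfig actually contains, e.g. using the KUBECONFIG path reported in the start logs above:

	KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig kubectl config get-contexts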

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-878000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-878000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-878000 describe deploy/metrics-server -n kube-system: exit status 1 (27.654125ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-878000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-878000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000: exit status 7 (29.730708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
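
As with DeployApp, the addon assertion has nothing to inspect: "addons enable" appears to exit cleanly at the CLI level, but the describe call fails because the context was never written. On a healthy cluster, the check at start_stop_delete_test.go:221 boils down to reading the deployment's image and comparing it against fake.domain/registry.k8s.io/echoserver:1.4; a rough manual equivalent (a sketch, not the test's exact code):

	kubectl --context no-preload-878000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'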

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-878000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-878000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.181771083s)

                                                
                                                
-- stdout --
	* [no-preload-878000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-878000" primary control-plane node in "no-preload-878000" cluster
	* Restarting existing qemu2 VM for "no-preload-878000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-878000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:00:53.006621    9996 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:00:53.006743    9996 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:53.006746    9996 out.go:304] Setting ErrFile to fd 2...
	I0729 11:00:53.006748    9996 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:53.006876    9996 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:00:53.007876    9996 out.go:298] Setting JSON to false
	I0729 11:00:53.024100    9996 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5422,"bootTime":1722270631,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 11:00:53.024171    9996 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:00:53.028779    9996 out.go:177] * [no-preload-878000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 11:00:53.036727    9996 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 11:00:53.036775    9996 notify.go:220] Checking for updates...
	I0729 11:00:53.044525    9996 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 11:00:53.048648    9996 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 11:00:53.051674    9996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:00:53.053040    9996 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 11:00:53.055719    9996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:00:53.059051    9996 config.go:182] Loaded profile config "no-preload-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 11:00:53.059314    9996 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:00:53.063588    9996 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 11:00:53.070713    9996 start.go:297] selected driver: qemu2
	I0729 11:00:53.070719    9996 start.go:901] validating driver "qemu2" against &{Name:no-preload-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:00:53.070805    9996 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:00:53.073024    9996 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:00:53.073066    9996 cni.go:84] Creating CNI manager for ""
	I0729 11:00:53.073077    9996 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 11:00:53.073096    9996 start.go:340] cluster config:
	{Name:no-preload-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:00:53.076392    9996 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:53.084680    9996 out.go:177] * Starting "no-preload-878000" primary control-plane node in "no-preload-878000" cluster
	I0729 11:00:53.088759    9996 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 11:00:53.088821    9996 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/no-preload-878000/config.json ...
	I0729 11:00:53.088857    9996 cache.go:107] acquiring lock: {Name:mk999e4e69584c4a64cb49ec9e99877f268d7913 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:53.088866    9996 cache.go:107] acquiring lock: {Name:mk1e14473e57849268c3928f20bbe65ba5d64a74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:53.088923    9996 cache.go:115] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 11:00:53.088923    9996 cache.go:115] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 11:00:53.088928    9996 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 71.25µs
	I0729 11:00:53.088929    9996 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 74.833µs
	I0729 11:00:53.088934    9996 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 11:00:53.088934    9996 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 11:00:53.088944    9996 cache.go:107] acquiring lock: {Name:mk989ec7345c3af9dcce35ec8c16f46f993f77d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:53.088924    9996 cache.go:107] acquiring lock: {Name:mk9a28a9832f7383c9c1f4e6800474fd99ce7216 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:53.088982    9996 cache.go:115] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 11:00:53.088986    9996 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 44.791µs
	I0729 11:00:53.088989    9996 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 11:00:53.088998    9996 cache.go:115] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 11:00:53.089004    9996 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 92.042µs
	I0729 11:00:53.089008    9996 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 11:00:53.089041    9996 cache.go:107] acquiring lock: {Name:mka8feaf2359bd31646395d57f0a4728788e453f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:53.089048    9996 cache.go:107] acquiring lock: {Name:mk35e17b78045ca4c4b30e88afc14065ebd61bfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:53.089052    9996 cache.go:107] acquiring lock: {Name:mkbba392f86182b40496a89e01a5f59882aae220 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:53.089070    9996 cache.go:107] acquiring lock: {Name:mk24e73381f57ed11af5214a654d57bc779b1a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:53.089097    9996 cache.go:115] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 11:00:53.089110    9996 cache.go:115] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 11:00:53.089110    9996 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 79.416µs
	I0729 11:00:53.089115    9996 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 11:00:53.089114    9996 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 178.458µs
	I0729 11:00:53.089122    9996 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 11:00:53.089132    9996 cache.go:115] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 11:00:53.089138    9996 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 74.167µs
	I0729 11:00:53.089142    9996 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 11:00:53.089213    9996 start.go:360] acquireMachinesLock for no-preload-878000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:53.089218    9996 cache.go:115] /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 11:00:53.089223    9996 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 202.084µs
	I0729 11:00:53.089228    9996 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 11:00:53.089230    9996 cache.go:87] Successfully saved all images to host disk.
	I0729 11:00:53.089241    9996 start.go:364] duration metric: took 22.792µs to acquireMachinesLock for "no-preload-878000"
	I0729 11:00:53.089250    9996 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:00:53.089255    9996 fix.go:54] fixHost starting: 
	I0729 11:00:53.089359    9996 fix.go:112] recreateIfNeeded on no-preload-878000: state=Stopped err=<nil>
	W0729 11:00:53.089369    9996 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:00:53.097663    9996 out.go:177] * Restarting existing qemu2 VM for "no-preload-878000" ...
	I0729 11:00:53.101679    9996 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:53.101718    9996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:a9:25:50:6e:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2
	I0729 11:00:53.103530    9996 main.go:141] libmachine: STDOUT: 
	I0729 11:00:53.103546    9996 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:53.103570    9996 fix.go:56] duration metric: took 14.316666ms for fixHost
	I0729 11:00:53.103573    9996 start.go:83] releasing machines lock for "no-preload-878000", held for 14.329208ms
	W0729 11:00:53.103580    9996 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:00:53.103608    9996 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:53.103612    9996 start.go:729] Will try again in 5 seconds ...
	I0729 11:00:58.105811    9996 start.go:360] acquireMachinesLock for no-preload-878000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:58.106431    9996 start.go:364] duration metric: took 474.417µs to acquireMachinesLock for "no-preload-878000"
	I0729 11:00:58.106532    9996 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:00:58.106554    9996 fix.go:54] fixHost starting: 
	I0729 11:00:58.107295    9996 fix.go:112] recreateIfNeeded on no-preload-878000: state=Stopped err=<nil>
	W0729 11:00:58.107323    9996 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:00:58.110950    9996 out.go:177] * Restarting existing qemu2 VM for "no-preload-878000" ...
	I0729 11:00:58.118795    9996 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:58.119063    9996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:a9:25:50:6e:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/no-preload-878000/disk.qcow2
	I0729 11:00:58.128192    9996 main.go:141] libmachine: STDOUT: 
	I0729 11:00:58.128239    9996 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:58.128308    9996 fix.go:56] duration metric: took 21.759625ms for fixHost
	I0729 11:00:58.128323    9996 start.go:83] releasing machines lock for "no-preload-878000", held for 21.868708ms
	W0729 11:00:58.128476    9996 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:00:58.135737    9996 out.go:177] 
	W0729 11:00:58.138851    9996 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:00:58.138874    9996 out.go:239] * 
	* 
	W0729 11:00:58.140861    9996 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:00:58.152678    9996 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-878000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000: exit status 7 (57.121625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.24s)
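
SecondStart takes the restart path (fixHost) rather than provisioning, but both retries stall on the same socket_vmnet connection. Because socket_vmnet_client obtains the network fd and then execs the command it is given (as in the qemu-system-aarch64 invocations above), the failing step can be probed in isolation; a manual sketch, assuming the client will exec an arbitrary command:

	# exits with "Connection refused" on stderr if the daemon is unreachable
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true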

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-878000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000: exit status 7 (32.266625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-878000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-878000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-878000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.873959ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-878000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-878000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000: exit status 7 (29.990417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-878000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000: exit status 7 (28.924125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
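
The image-list diff follows directly from the stopped host: with no running VM there is no runtime to query, so every expected v1.31.0-beta.0 image lands on the -want side. Once socket_vmnet is healthy, the verification step can be re-run by hand with the same command the test uses:

	out/minikube-darwin-arm64 -p no-preload-878000 image list --format=json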

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-878000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-878000 --alsologtostderr -v=1: exit status 83 (39.804ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-878000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:00:58.405842   10015 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:00:58.405989   10015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:58.405992   10015 out.go:304] Setting ErrFile to fd 2...
	I0729 11:00:58.405995   10015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:58.406131   10015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:00:58.406359   10015 out.go:298] Setting JSON to false
	I0729 11:00:58.406366   10015 mustload.go:65] Loading cluster: no-preload-878000
	I0729 11:00:58.406583   10015 config.go:182] Loaded profile config "no-preload-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 11:00:58.409761   10015 out.go:177] * The control-plane node no-preload-878000 host is not running: state=Stopped
	I0729 11:00:58.413734   10015 out.go:177]   To start a cluster, run: "minikube start -p no-preload-878000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-878000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000: exit status 7 (29.209125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-878000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000: exit status 7 (29.188792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.850839875s)

                                                
                                                
-- stdout --
	* [embed-certs-613000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-613000" primary control-plane node in "embed-certs-613000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-613000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:00:58.716089   10032 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:00:58.716230   10032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:58.716239   10032 out.go:304] Setting ErrFile to fd 2...
	I0729 11:00:58.716241   10032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:00:58.716383   10032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:00:58.717459   10032 out.go:298] Setting JSON to false
	I0729 11:00:58.733748   10032 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5427,"bootTime":1722270631,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 11:00:58.733811   10032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:00:58.737591   10032 out.go:177] * [embed-certs-613000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 11:00:58.745512   10032 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 11:00:58.745596   10032 notify.go:220] Checking for updates...
	I0729 11:00:58.752570   10032 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 11:00:58.755500   10032 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 11:00:58.758508   10032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:00:58.761506   10032 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 11:00:58.764479   10032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:00:58.767792   10032 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:00:58.767851   10032 config.go:182] Loaded profile config "stopped-upgrade-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 11:00:58.767898   10032 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:00:58.772547   10032 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 11:00:58.779476   10032 start.go:297] selected driver: qemu2
	I0729 11:00:58.779484   10032 start.go:901] validating driver "qemu2" against <nil>
	I0729 11:00:58.779492   10032 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:00:58.781865   10032 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:00:58.785535   10032 out.go:177] * Automatically selected the socket_vmnet network
	I0729 11:00:58.789604   10032 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:00:58.789621   10032 cni.go:84] Creating CNI manager for ""
	I0729 11:00:58.789632   10032 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 11:00:58.789636   10032 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 11:00:58.789679   10032 start.go:340] cluster config:
	{Name:embed-certs-613000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:00:58.793352   10032 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:00:58.801336   10032 out.go:177] * Starting "embed-certs-613000" primary control-plane node in "embed-certs-613000" cluster
	I0729 11:00:58.804554   10032 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:00:58.804569   10032 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 11:00:58.804585   10032 cache.go:56] Caching tarball of preloaded images
	I0729 11:00:58.804649   10032 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 11:00:58.804657   10032 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 11:00:58.804727   10032 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/embed-certs-613000/config.json ...
	I0729 11:00:58.804744   10032 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/embed-certs-613000/config.json: {Name:mkf3f19468409e0fb1aa4684169abf87f868fa6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:00:58.805058   10032 start.go:360] acquireMachinesLock for embed-certs-613000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:00:58.805088   10032 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "embed-certs-613000"
	I0729 11:00:58.805100   10032 start.go:93] Provisioning new machine with config: &{Name:embed-certs-613000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:00:58.805122   10032 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:00:58.808545   10032 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:00:58.824833   10032 start.go:159] libmachine.API.Create for "embed-certs-613000" (driver="qemu2")
	I0729 11:00:58.824857   10032 client.go:168] LocalClient.Create starting
	I0729 11:00:58.824916   10032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:00:58.824945   10032 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:58.824958   10032 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:58.825011   10032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:00:58.825044   10032 main.go:141] libmachine: Decoding PEM data...
	I0729 11:00:58.825057   10032 main.go:141] libmachine: Parsing certificate...
	I0729 11:00:58.825408   10032 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:00:58.975007   10032 main.go:141] libmachine: Creating SSH key...
	I0729 11:00:59.042591   10032 main.go:141] libmachine: Creating Disk image...
	I0729 11:00:59.042597   10032 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:00:59.042825   10032 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2
	I0729 11:00:59.052155   10032 main.go:141] libmachine: STDOUT: 
	I0729 11:00:59.052172   10032 main.go:141] libmachine: STDERR: 
	I0729 11:00:59.052225   10032 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2 +20000M
	I0729 11:00:59.060120   10032 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:00:59.060133   10032 main.go:141] libmachine: STDERR: 
	I0729 11:00:59.060146   10032 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2
	I0729 11:00:59.060154   10032 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:00:59.060168   10032 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:00:59.060192   10032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:eb:a9:cc:62:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2
	I0729 11:00:59.061904   10032 main.go:141] libmachine: STDOUT: 
	I0729 11:00:59.061919   10032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:00:59.061944   10032 client.go:171] duration metric: took 237.087292ms to LocalClient.Create
	I0729 11:01:01.064130   10032 start.go:128] duration metric: took 2.2590135s to createHost
	I0729 11:01:01.064218   10032 start.go:83] releasing machines lock for "embed-certs-613000", held for 2.259158416s
	W0729 11:01:01.064344   10032 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:01.075525   10032 out.go:177] * Deleting "embed-certs-613000" in qemu2 ...
	W0729 11:01:01.102572   10032 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:01.102609   10032 start.go:729] Will try again in 5 seconds ...
	I0729 11:01:06.104804   10032 start.go:360] acquireMachinesLock for embed-certs-613000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:01:06.105233   10032 start.go:364] duration metric: took 313.333µs to acquireMachinesLock for "embed-certs-613000"
	I0729 11:01:06.105350   10032 start.go:93] Provisioning new machine with config: &{Name:embed-certs-613000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:01:06.105611   10032 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:01:06.110423   10032 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:01:06.158770   10032 start.go:159] libmachine.API.Create for "embed-certs-613000" (driver="qemu2")
	I0729 11:01:06.158835   10032 client.go:168] LocalClient.Create starting
	I0729 11:01:06.158953   10032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:01:06.159009   10032 main.go:141] libmachine: Decoding PEM data...
	I0729 11:01:06.159027   10032 main.go:141] libmachine: Parsing certificate...
	I0729 11:01:06.159102   10032 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:01:06.159146   10032 main.go:141] libmachine: Decoding PEM data...
	I0729 11:01:06.159164   10032 main.go:141] libmachine: Parsing certificate...
	I0729 11:01:06.159709   10032 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:01:06.354288   10032 main.go:141] libmachine: Creating SSH key...
	I0729 11:01:06.472934   10032 main.go:141] libmachine: Creating Disk image...
	I0729 11:01:06.472940   10032 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:01:06.473149   10032 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2
	I0729 11:01:06.482498   10032 main.go:141] libmachine: STDOUT: 
	I0729 11:01:06.482513   10032 main.go:141] libmachine: STDERR: 
	I0729 11:01:06.482565   10032 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2 +20000M
	I0729 11:01:06.490351   10032 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:01:06.490362   10032 main.go:141] libmachine: STDERR: 
	I0729 11:01:06.490373   10032 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2
	I0729 11:01:06.490377   10032 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:01:06.490395   10032 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:01:06.490427   10032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:b5:10:07:7d:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2
	I0729 11:01:06.492102   10032 main.go:141] libmachine: STDOUT: 
	I0729 11:01:06.492118   10032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:01:06.492130   10032 client.go:171] duration metric: took 333.294959ms to LocalClient.Create
	I0729 11:01:08.494257   10032 start.go:128] duration metric: took 2.388662s to createHost
	I0729 11:01:08.494311   10032 start.go:83] releasing machines lock for "embed-certs-613000", held for 2.389096208s
	W0729 11:01:08.494727   10032 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-613000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-613000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:08.505343   10032 out.go:177] 
	W0729 11:01:08.513444   10032 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:01:08.513473   10032 out.go:239] * 
	* 
	W0729 11:01:08.516026   10032 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:01:08.525328   10032 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (66.283041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.92s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-630000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-630000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.880181167s)

-- stdout --
	* [default-k8s-diff-port-630000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-630000" primary control-plane node in "default-k8s-diff-port-630000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-630000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 11:01:03.017829   10052 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:01:03.018018   10052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:03.018021   10052 out.go:304] Setting ErrFile to fd 2...
	I0729 11:01:03.018024   10052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:03.018144   10052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:01:03.019108   10052 out.go:298] Setting JSON to false
	I0729 11:01:03.035060   10052 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5432,"bootTime":1722270631,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 11:01:03.035124   10052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:01:03.040606   10052 out.go:177] * [default-k8s-diff-port-630000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 11:01:03.047481   10052 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 11:01:03.047532   10052 notify.go:220] Checking for updates...
	I0729 11:01:03.054294   10052 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 11:01:03.057499   10052 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 11:01:03.060498   10052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:01:03.063482   10052 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 11:01:03.066452   10052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:01:03.069909   10052 config.go:182] Loaded profile config "embed-certs-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:01:03.069972   10052 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:01:03.070027   10052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:01:03.074469   10052 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 11:01:03.085452   10052 start.go:297] selected driver: qemu2
	I0729 11:01:03.085459   10052 start.go:901] validating driver "qemu2" against <nil>
	I0729 11:01:03.085466   10052 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:01:03.087885   10052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:01:03.091509   10052 out.go:177] * Automatically selected the socket_vmnet network
	I0729 11:01:03.094532   10052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:01:03.094563   10052 cni.go:84] Creating CNI manager for ""
	I0729 11:01:03.094572   10052 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 11:01:03.094576   10052 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 11:01:03.094615   10052 start.go:340] cluster config:
	{Name:default-k8s-diff-port-630000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-630000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:01:03.098412   10052 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:01:03.107397   10052 out.go:177] * Starting "default-k8s-diff-port-630000" primary control-plane node in "default-k8s-diff-port-630000" cluster
	I0729 11:01:03.111459   10052 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:01:03.111475   10052 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 11:01:03.111488   10052 cache.go:56] Caching tarball of preloaded images
	I0729 11:01:03.111554   10052 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 11:01:03.111560   10052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 11:01:03.111634   10052 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/default-k8s-diff-port-630000/config.json ...
	I0729 11:01:03.111649   10052 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/default-k8s-diff-port-630000/config.json: {Name:mk9d02b4119d4c88e69a3d25e0ae36bfd09f5092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:01:03.112069   10052 start.go:360] acquireMachinesLock for default-k8s-diff-port-630000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:01:03.112105   10052 start.go:364] duration metric: took 29.333µs to acquireMachinesLock for "default-k8s-diff-port-630000"
	I0729 11:01:03.112118   10052 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-630000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-630000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:01:03.112158   10052 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:01:03.116475   10052 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:01:03.134751   10052 start.go:159] libmachine.API.Create for "default-k8s-diff-port-630000" (driver="qemu2")
	I0729 11:01:03.134777   10052 client.go:168] LocalClient.Create starting
	I0729 11:01:03.134840   10052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:01:03.134877   10052 main.go:141] libmachine: Decoding PEM data...
	I0729 11:01:03.134886   10052 main.go:141] libmachine: Parsing certificate...
	I0729 11:01:03.134925   10052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:01:03.134949   10052 main.go:141] libmachine: Decoding PEM data...
	I0729 11:01:03.134958   10052 main.go:141] libmachine: Parsing certificate...
	I0729 11:01:03.135437   10052 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:01:03.311511   10052 main.go:141] libmachine: Creating SSH key...
	I0729 11:01:03.420405   10052 main.go:141] libmachine: Creating Disk image...
	I0729 11:01:03.420410   10052 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:01:03.420627   10052 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2
	I0729 11:01:03.429677   10052 main.go:141] libmachine: STDOUT: 
	I0729 11:01:03.429709   10052 main.go:141] libmachine: STDERR: 
	I0729 11:01:03.429757   10052 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2 +20000M
	I0729 11:01:03.437570   10052 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:01:03.437593   10052 main.go:141] libmachine: STDERR: 
	I0729 11:01:03.437611   10052 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2
	I0729 11:01:03.437618   10052 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:01:03.437632   10052 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:01:03.437659   10052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:0b:aa:e0:43:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2
	I0729 11:01:03.439267   10052 main.go:141] libmachine: STDOUT: 
	I0729 11:01:03.439291   10052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:01:03.439309   10052 client.go:171] duration metric: took 304.532666ms to LocalClient.Create
	I0729 11:01:05.441567   10052 start.go:128] duration metric: took 2.329420417s to createHost
	I0729 11:01:05.441652   10052 start.go:83] releasing machines lock for "default-k8s-diff-port-630000", held for 2.329575917s
	W0729 11:01:05.441696   10052 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:05.455759   10052 out.go:177] * Deleting "default-k8s-diff-port-630000" in qemu2 ...
	W0729 11:01:05.487112   10052 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:05.487150   10052 start.go:729] Will try again in 5 seconds ...
	I0729 11:01:10.489227   10052 start.go:360] acquireMachinesLock for default-k8s-diff-port-630000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:01:10.489681   10052 start.go:364] duration metric: took 382.375µs to acquireMachinesLock for "default-k8s-diff-port-630000"
	I0729 11:01:10.489784   10052 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-630000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-630000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:01:10.490034   10052 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:01:10.499529   10052 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:01:10.551225   10052 start.go:159] libmachine.API.Create for "default-k8s-diff-port-630000" (driver="qemu2")
	I0729 11:01:10.551272   10052 client.go:168] LocalClient.Create starting
	I0729 11:01:10.551370   10052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:01:10.551423   10052 main.go:141] libmachine: Decoding PEM data...
	I0729 11:01:10.551440   10052 main.go:141] libmachine: Parsing certificate...
	I0729 11:01:10.551500   10052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:01:10.551530   10052 main.go:141] libmachine: Decoding PEM data...
	I0729 11:01:10.551545   10052 main.go:141] libmachine: Parsing certificate...
	I0729 11:01:10.552095   10052 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:01:10.712550   10052 main.go:141] libmachine: Creating SSH key...
	I0729 11:01:10.804718   10052 main.go:141] libmachine: Creating Disk image...
	I0729 11:01:10.804728   10052 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:01:10.804957   10052 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2
	I0729 11:01:10.813948   10052 main.go:141] libmachine: STDOUT: 
	I0729 11:01:10.813975   10052 main.go:141] libmachine: STDERR: 
	I0729 11:01:10.814049   10052 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2 +20000M
	I0729 11:01:10.821998   10052 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:01:10.822016   10052 main.go:141] libmachine: STDERR: 
	I0729 11:01:10.822027   10052 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2
	I0729 11:01:10.822032   10052 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:01:10.822044   10052 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:01:10.822077   10052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:76:e2:40:69:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2
	I0729 11:01:10.823654   10052 main.go:141] libmachine: STDOUT: 
	I0729 11:01:10.823671   10052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:01:10.823683   10052 client.go:171] duration metric: took 272.410375ms to LocalClient.Create
	I0729 11:01:12.825852   10052 start.go:128] duration metric: took 2.335808833s to createHost
	I0729 11:01:12.825937   10052 start.go:83] releasing machines lock for "default-k8s-diff-port-630000", held for 2.336271792s
	W0729 11:01:12.826315   10052 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-630000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-630000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:12.838841   10052 out.go:177] 
	W0729 11:01:12.842980   10052 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:01:12.843007   10052 out.go:239] * 
	* 
	W0729 11:01:12.845695   10052 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:01:12.853851   10052 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-630000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000: exit status 7 (63.258333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-630000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-613000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-613000 create -f testdata/busybox.yaml: exit status 1 (30.4715ms)

** stderr ** 
	error: context "embed-certs-613000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-613000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (29.331583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (28.495417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-613000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-613000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-613000 describe deploy/metrics-server -n kube-system: exit status 1 (26.945833ms)

** stderr ** 
	error: context "embed-certs-613000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-613000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (28.682541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.81s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.746052417s)

-- stdout --
	* [embed-certs-613000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-613000" primary control-plane node in "embed-certs-613000" cluster
	* Restarting existing qemu2 VM for "embed-certs-613000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-613000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 11:01:12.194275   10104 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:01:12.194417   10104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:12.194420   10104 out.go:304] Setting ErrFile to fd 2...
	I0729 11:01:12.194422   10104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:12.194565   10104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:01:12.195503   10104 out.go:298] Setting JSON to false
	I0729 11:01:12.211605   10104 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5441,"bootTime":1722270631,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 11:01:12.211669   10104 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:01:12.215337   10104 out.go:177] * [embed-certs-613000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 11:01:12.221282   10104 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 11:01:12.221354   10104 notify.go:220] Checking for updates...
	I0729 11:01:12.228224   10104 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 11:01:12.232235   10104 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 11:01:12.235273   10104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:01:12.238311   10104 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 11:01:12.241234   10104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:01:12.244475   10104 config.go:182] Loaded profile config "embed-certs-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:01:12.244745   10104 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:01:12.249174   10104 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 11:01:12.256215   10104 start.go:297] selected driver: qemu2
	I0729 11:01:12.256221   10104 start.go:901] validating driver "qemu2" against &{Name:embed-certs-613000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:01:12.256280   10104 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:01:12.258774   10104 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:01:12.258821   10104 cni.go:84] Creating CNI manager for ""
	I0729 11:01:12.258828   10104 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 11:01:12.258855   10104 start.go:340] cluster config:
	{Name:embed-certs-613000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:01:12.262629   10104 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:01:12.271223   10104 out.go:177] * Starting "embed-certs-613000" primary control-plane node in "embed-certs-613000" cluster
	I0729 11:01:12.275111   10104 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:01:12.275125   10104 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 11:01:12.275135   10104 cache.go:56] Caching tarball of preloaded images
	I0729 11:01:12.275203   10104 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 11:01:12.275209   10104 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 11:01:12.275260   10104 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/embed-certs-613000/config.json ...
	I0729 11:01:12.275733   10104 start.go:360] acquireMachinesLock for embed-certs-613000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:01:12.826125   10104 start.go:364] duration metric: took 550.339791ms to acquireMachinesLock for "embed-certs-613000"
	I0729 11:01:12.826269   10104 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:01:12.826302   10104 fix.go:54] fixHost starting: 
	I0729 11:01:12.827013   10104 fix.go:112] recreateIfNeeded on embed-certs-613000: state=Stopped err=<nil>
	W0729 11:01:12.827062   10104 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:01:12.838835   10104 out.go:177] * Restarting existing qemu2 VM for "embed-certs-613000" ...
	I0729 11:01:12.842950   10104 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:01:12.843122   10104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:b5:10:07:7d:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2
	I0729 11:01:12.853149   10104 main.go:141] libmachine: STDOUT: 
	I0729 11:01:12.853242   10104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:01:12.853417   10104 fix.go:56] duration metric: took 27.104083ms for fixHost
	I0729 11:01:12.853440   10104 start.go:83] releasing machines lock for "embed-certs-613000", held for 27.285ms
	W0729 11:01:12.853479   10104 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:01:12.853656   10104 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:12.853683   10104 start.go:729] Will try again in 5 seconds ...
	I0729 11:01:17.855814   10104 start.go:360] acquireMachinesLock for embed-certs-613000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:01:17.856243   10104 start.go:364] duration metric: took 307.959µs to acquireMachinesLock for "embed-certs-613000"
	I0729 11:01:17.856355   10104 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:01:17.856379   10104 fix.go:54] fixHost starting: 
	I0729 11:01:17.857114   10104 fix.go:112] recreateIfNeeded on embed-certs-613000: state=Stopped err=<nil>
	W0729 11:01:17.857141   10104 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:01:17.862630   10104 out.go:177] * Restarting existing qemu2 VM for "embed-certs-613000" ...
	I0729 11:01:17.869659   10104 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:01:17.869816   10104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:b5:10:07:7d:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/embed-certs-613000/disk.qcow2
	I0729 11:01:17.878739   10104 main.go:141] libmachine: STDOUT: 
	I0729 11:01:17.878801   10104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:01:17.878870   10104 fix.go:56] duration metric: took 22.498041ms for fixHost
	I0729 11:01:17.878886   10104 start.go:83] releasing machines lock for "embed-certs-613000", held for 22.615208ms
	W0729 11:01:17.879052   10104 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-613000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-613000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:17.885603   10104 out.go:177] 
	W0729 11:01:17.886947   10104 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:01:17.886964   10104 out.go:239] * 
	* 
	W0729 11:01:17.889746   10104 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:01:17.898541   10104 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-613000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (62.875375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.81s)
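Note: every qemu2 failure in this report reduces to the same line in the stderr above: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the socket_vmnet daemon is not accepting connections on /var/run/socket_vmnet ("Connection refused"). A minimal diagnostic sketch, not part of minikube, with the socket path taken from these logs and the timeout an assumption:

    // probe_socket_vmnet.go: dial the unix socket that socket_vmnet_client
    // uses; a "connection refused" here reproduces the failure above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path from the logs above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, restarting the daemon on the build host (for a Homebrew-managed install, "sudo brew services restart socket_vmnet") would be the first thing to try.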

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-630000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-630000 create -f testdata/busybox.yaml: exit status 1 (29.171959ms)

** stderr ** 
	error: context "default-k8s-diff-port-630000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-630000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000: exit status 7 (29.037042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-630000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000: exit status 7 (29.2195ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-630000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
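Note: the error context "default-k8s-diff-port-630000" does not exist is a knock-on effect rather than an independent failure: the VM never started (its start attempts in this report all exit with status 80), so minikube never wrote the profile's context into the kubeconfig, and every kubectl --context call fails before reaching a cluster. A hypothetical precondition check using client-go's clientcmd package, with the kubeconfig path taken from the KUBECONFIG value shown in these logs:

    // has_context.go: sketch illustrating why the kubectl calls fail fast.
    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        name := "default-k8s-diff-port-630000"
        if _, ok := cfg.Contexts[name]; !ok {
            // same condition kubectl reports above
            fmt.Printf("error: context %q does not exist\n", name)
            os.Exit(1)
        }
        fmt.Printf("context %q found\n", name)
    }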

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-630000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-630000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-630000 describe deploy/metrics-server -n kube-system: exit status 1 (26.392958ms)

** stderr ** 
	error: context "default-k8s-diff-port-630000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-630000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000: exit status 7 (28.403916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-630000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-630000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-630000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.1920825s)

-- stdout --
	* [default-k8s-diff-port-630000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-630000" primary control-plane node in "default-k8s-diff-port-630000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-630000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-630000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 11:01:16.903747   10145 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:01:16.903968   10145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:16.903972   10145 out.go:304] Setting ErrFile to fd 2...
	I0729 11:01:16.903975   10145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:16.904129   10145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:01:16.905305   10145 out.go:298] Setting JSON to false
	I0729 11:01:16.921568   10145 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5445,"bootTime":1722270631,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 11:01:16.921638   10145 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:01:16.925569   10145 out.go:177] * [default-k8s-diff-port-630000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 11:01:16.932673   10145 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 11:01:16.932743   10145 notify.go:220] Checking for updates...
	I0729 11:01:16.939541   10145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 11:01:16.942601   10145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 11:01:16.945504   10145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:01:16.948592   10145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 11:01:16.951622   10145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:01:16.954850   10145 config.go:182] Loaded profile config "default-k8s-diff-port-630000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:01:16.955134   10145 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:01:16.959609   10145 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 11:01:16.966551   10145 start.go:297] selected driver: qemu2
	I0729 11:01:16.966557   10145 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-630000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-630000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:01:16.966622   10145 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:01:16.968948   10145 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:01:16.969064   10145 cni.go:84] Creating CNI manager for ""
	I0729 11:01:16.969073   10145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 11:01:16.969112   10145 start.go:340] cluster config:
	{Name:default-k8s-diff-port-630000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-630000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:01:16.972654   10145 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:01:16.979511   10145 out.go:177] * Starting "default-k8s-diff-port-630000" primary control-plane node in "default-k8s-diff-port-630000" cluster
	I0729 11:01:16.983550   10145 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:01:16.983565   10145 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 11:01:16.983575   10145 cache.go:56] Caching tarball of preloaded images
	I0729 11:01:16.983626   10145 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 11:01:16.983632   10145 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 11:01:16.983703   10145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/default-k8s-diff-port-630000/config.json ...
	I0729 11:01:16.984192   10145 start.go:360] acquireMachinesLock for default-k8s-diff-port-630000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:01:16.984220   10145 start.go:364] duration metric: took 22.5µs to acquireMachinesLock for "default-k8s-diff-port-630000"
	I0729 11:01:16.984234   10145 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:01:16.984241   10145 fix.go:54] fixHost starting: 
	I0729 11:01:16.984358   10145 fix.go:112] recreateIfNeeded on default-k8s-diff-port-630000: state=Stopped err=<nil>
	W0729 11:01:16.984369   10145 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:01:16.988569   10145 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-630000" ...
	I0729 11:01:16.996581   10145 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:01:16.996620   10145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:76:e2:40:69:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2
	I0729 11:01:16.998658   10145 main.go:141] libmachine: STDOUT: 
	I0729 11:01:16.998678   10145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:01:16.998706   10145 fix.go:56] duration metric: took 14.465208ms for fixHost
	I0729 11:01:16.998711   10145 start.go:83] releasing machines lock for "default-k8s-diff-port-630000", held for 14.487042ms
	W0729 11:01:16.998717   10145 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:01:16.998746   10145 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:16.998752   10145 start.go:729] Will try again in 5 seconds ...
	I0729 11:01:22.000834   10145 start.go:360] acquireMachinesLock for default-k8s-diff-port-630000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:01:22.001291   10145 start.go:364] duration metric: took 355.416µs to acquireMachinesLock for "default-k8s-diff-port-630000"
	I0729 11:01:22.001426   10145 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:01:22.001451   10145 fix.go:54] fixHost starting: 
	I0729 11:01:22.002244   10145 fix.go:112] recreateIfNeeded on default-k8s-diff-port-630000: state=Stopped err=<nil>
	W0729 11:01:22.002273   10145 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:01:22.015549   10145 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-630000" ...
	I0729 11:01:22.020876   10145 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:01:22.021040   10145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:76:e2:40:69:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/default-k8s-diff-port-630000/disk.qcow2
	I0729 11:01:22.030108   10145 main.go:141] libmachine: STDOUT: 
	I0729 11:01:22.030191   10145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:01:22.030283   10145 fix.go:56] duration metric: took 28.834042ms for fixHost
	I0729 11:01:22.030306   10145 start.go:83] releasing machines lock for "default-k8s-diff-port-630000", held for 28.992041ms
	W0729 11:01:22.030541   10145 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-630000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-630000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:22.039693   10145 out.go:177] 
	W0729 11:01:22.042833   10145 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:01:22.042871   10145 out.go:239] * 
	* 
	W0729 11:01:22.045377   10145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:01:22.054766   10145 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-630000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000: exit status 7 (65.715916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-630000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
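Note: the timestamps above show the start path retrying exactly once: the first fixHost attempt fails at 11:01:16, the machines lock is released, and after "Will try again in 5 seconds ..." a second identical attempt fails at 11:01:22, ending in exit status 80 (GUEST_PROVISION). A compressed sketch of that control flow; the function is a placeholder, not minikube's actual code:

    // retry_start.go: the single-retry shape visible in the log timestamps.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // startHost stands in for the driver start that fails with
    // `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
    func startHost() error {
        return errors.New("driver start: connection refused")
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
                os.Exit(80) // matches the exit status the test reports
            }
        }
    }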

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-613000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (31.821458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-613000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-613000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-613000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.694542ms)

** stderr ** 
	error: context "embed-certs-613000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-613000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (29.057542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-613000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (29.13625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
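Note: the "(-want +got)" block above matches the diff style of github.com/google/go-cmp: every expected v1.30.3 image sits on the "-" (want) side and the "+" (got) side is empty, because "image list" against a stopped host returns nothing. A minimal reproduction of that diff shape, assuming go-cmp, with the image names copied from the log and abbreviated:

    // image_diff.go: reproduce the "-want +got" output shown above with an
    // empty "got", as returned when the host is not running.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/kube-apiserver:v1.30.3",
            // remaining v1.30.3 images elided for brevity
        }
        var got []string // empty: "image list" yields nothing on a stopped host
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.30.3 images missing (-want +got):\n%s", diff)
        }
    }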

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-613000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-613000 --alsologtostderr -v=1: exit status 83 (41.501ms)

-- stdout --
	* The control-plane node embed-certs-613000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-613000"

-- /stdout --
** stderr ** 
	I0729 11:01:18.161218   10164 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:01:18.161372   10164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:18.161376   10164 out.go:304] Setting ErrFile to fd 2...
	I0729 11:01:18.161379   10164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:18.161519   10164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:01:18.161745   10164 out.go:298] Setting JSON to false
	I0729 11:01:18.161751   10164 mustload.go:65] Loading cluster: embed-certs-613000
	I0729 11:01:18.161950   10164 config.go:182] Loaded profile config "embed-certs-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:01:18.166341   10164 out.go:177] * The control-plane node embed-certs-613000 host is not running: state=Stopped
	I0729 11:01:18.170342   10164 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-613000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-613000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (28.256042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (28.887792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-377000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-377000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.810762291s)

-- stdout --
	* [newest-cni-377000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-377000" primary control-plane node in "newest-cni-377000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-377000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 11:01:18.466399   10182 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:01:18.466514   10182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:18.466517   10182 out.go:304] Setting ErrFile to fd 2...
	I0729 11:01:18.466520   10182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:18.466632   10182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:01:18.467702   10182 out.go:298] Setting JSON to false
	I0729 11:01:18.484027   10182 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5447,"bootTime":1722270631,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 11:01:18.484109   10182 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:01:18.489365   10182 out.go:177] * [newest-cni-377000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 11:01:18.495274   10182 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 11:01:18.495328   10182 notify.go:220] Checking for updates...
	I0729 11:01:18.503374   10182 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 11:01:18.506283   10182 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 11:01:18.509348   10182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:01:18.512356   10182 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 11:01:18.515311   10182 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:01:18.518619   10182 config.go:182] Loaded profile config "default-k8s-diff-port-630000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:01:18.518684   10182 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:01:18.518744   10182 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:01:18.521310   10182 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 11:01:18.528351   10182 start.go:297] selected driver: qemu2
	I0729 11:01:18.528359   10182 start.go:901] validating driver "qemu2" against <nil>
	I0729 11:01:18.528366   10182 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:01:18.530721   10182 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 11:01:18.530748   10182 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 11:01:18.533332   10182 out.go:177] * Automatically selected the socket_vmnet network
	I0729 11:01:18.540412   10182 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 11:01:18.540432   10182 cni.go:84] Creating CNI manager for ""
	I0729 11:01:18.540441   10182 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 11:01:18.540447   10182 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 11:01:18.540483   10182 start.go:340] cluster config:
	{Name:newest-cni-377000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:01:18.544368   10182 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:01:18.552358   10182 out.go:177] * Starting "newest-cni-377000" primary control-plane node in "newest-cni-377000" cluster
	I0729 11:01:18.556287   10182 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 11:01:18.556302   10182 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 11:01:18.556317   10182 cache.go:56] Caching tarball of preloaded images
	I0729 11:01:18.556383   10182 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 11:01:18.556389   10182 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 11:01:18.556455   10182 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/newest-cni-377000/config.json ...
	I0729 11:01:18.556476   10182 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/newest-cni-377000/config.json: {Name:mk7a0ff65e60760117752d90ddbc59b2aa74ab89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:01:18.556886   10182 start.go:360] acquireMachinesLock for newest-cni-377000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:01:18.556921   10182 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "newest-cni-377000"
	I0729 11:01:18.556933   10182 start.go:93] Provisioning new machine with config: &{Name:newest-cni-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:01:18.556964   10182 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:01:18.566322   10182 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:01:18.584812   10182 start.go:159] libmachine.API.Create for "newest-cni-377000" (driver="qemu2")
	I0729 11:01:18.584840   10182 client.go:168] LocalClient.Create starting
	I0729 11:01:18.584907   10182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:01:18.584937   10182 main.go:141] libmachine: Decoding PEM data...
	I0729 11:01:18.584948   10182 main.go:141] libmachine: Parsing certificate...
	I0729 11:01:18.584990   10182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:01:18.585015   10182 main.go:141] libmachine: Decoding PEM data...
	I0729 11:01:18.585021   10182 main.go:141] libmachine: Parsing certificate...
	I0729 11:01:18.585451   10182 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:01:18.758599   10182 main.go:141] libmachine: Creating SSH key...
	I0729 11:01:18.805958   10182 main.go:141] libmachine: Creating Disk image...
	I0729 11:01:18.805964   10182 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:01:18.806184   10182 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2
	I0729 11:01:18.815409   10182 main.go:141] libmachine: STDOUT: 
	I0729 11:01:18.815422   10182 main.go:141] libmachine: STDERR: 
	I0729 11:01:18.815476   10182 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2 +20000M
	I0729 11:01:18.823236   10182 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:01:18.823249   10182 main.go:141] libmachine: STDERR: 
	I0729 11:01:18.823264   10182 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2
	I0729 11:01:18.823270   10182 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:01:18.823281   10182 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:01:18.823308   10182 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:36:b6:33:9b:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2
	I0729 11:01:18.824958   10182 main.go:141] libmachine: STDOUT: 
	I0729 11:01:18.824971   10182 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:01:18.824990   10182 client.go:171] duration metric: took 240.147083ms to LocalClient.Create
	I0729 11:01:20.827126   10182 start.go:128] duration metric: took 2.2701825s to createHost
	I0729 11:01:20.827189   10182 start.go:83] releasing machines lock for "newest-cni-377000", held for 2.270296333s
	W0729 11:01:20.827253   10182 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:20.838484   10182 out.go:177] * Deleting "newest-cni-377000" in qemu2 ...
	W0729 11:01:20.868628   10182 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:20.868657   10182 start.go:729] Will try again in 5 seconds ...
	I0729 11:01:25.868849   10182 start.go:360] acquireMachinesLock for newest-cni-377000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:01:25.869294   10182 start.go:364] duration metric: took 345.625µs to acquireMachinesLock for "newest-cni-377000"
	I0729 11:01:25.869481   10182 start.go:93] Provisioning new machine with config: &{Name:newest-cni-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:01:25.869700   10182 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 11:01:25.875352   10182 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:01:25.923999   10182 start.go:159] libmachine.API.Create for "newest-cni-377000" (driver="qemu2")
	I0729 11:01:25.924076   10182 client.go:168] LocalClient.Create starting
	I0729 11:01:25.924268   10182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/ca.pem
	I0729 11:01:25.924344   10182 main.go:141] libmachine: Decoding PEM data...
	I0729 11:01:25.924361   10182 main.go:141] libmachine: Parsing certificate...
	I0729 11:01:25.924429   10182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19339-6071/.minikube/certs/cert.pem
	I0729 11:01:25.924475   10182 main.go:141] libmachine: Decoding PEM data...
	I0729 11:01:25.924489   10182 main.go:141] libmachine: Parsing certificate...
	I0729 11:01:25.925009   10182 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0729 11:01:26.080207   10182 main.go:141] libmachine: Creating SSH key...
	I0729 11:01:26.188631   10182 main.go:141] libmachine: Creating Disk image...
	I0729 11:01:26.188639   10182 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 11:01:26.188834   10182 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2.raw /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2
	I0729 11:01:26.198021   10182 main.go:141] libmachine: STDOUT: 
	I0729 11:01:26.198039   10182 main.go:141] libmachine: STDERR: 
	I0729 11:01:26.198089   10182 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2 +20000M
	I0729 11:01:26.205937   10182 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 11:01:26.205950   10182 main.go:141] libmachine: STDERR: 
	I0729 11:01:26.205965   10182 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2
	I0729 11:01:26.205969   10182 main.go:141] libmachine: Starting QEMU VM...
	I0729 11:01:26.205982   10182 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:01:26.206019   10182 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:34:01:1b:db:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2
	I0729 11:01:26.207678   10182 main.go:141] libmachine: STDOUT: 
	I0729 11:01:26.207695   10182 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:01:26.207708   10182 client.go:171] duration metric: took 283.606167ms to LocalClient.Create
	I0729 11:01:28.209850   10182 start.go:128] duration metric: took 2.340150792s to createHost
	I0729 11:01:28.209893   10182 start.go:83] releasing machines lock for "newest-cni-377000", held for 2.340612209s
	W0729 11:01:28.210186   10182 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-377000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-377000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:28.218678   10182 out.go:177] 
	W0729 11:01:28.225880   10182 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:01:28.225915   10182 out.go:239] * 
	* 
	W0729 11:01:28.229068   10182 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:01:28.238686   10182 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-377000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000: exit status 7 (68.266042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.88s)
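
Every start attempt in this test fails at the same step: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch for the CI host follows; the Homebrew-managed service is an assumption, not something this log confirms:

	ls -l /var/run/socket_vmnet               # the Unix socket should exist while the daemon is up
	pgrep -fl socket_vmnet                    # check whether the daemon process is running at all
	sudo brew services restart socket_vmnet   # assumes a Homebrew-managed socket_vmnet install

With the daemon reachable, the qemu-system-aarch64 invocation logged above should daemonize instead of exiting with status 1.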

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-630000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000: exit status 7 (32.477584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-630000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
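
This failure (and the rest of the default-k8s-diff-port group below) is a cascade of the earlier start failure: the VM never booted, so the "default-k8s-diff-port-630000" kubeconfig context was never written. A quick manual confirmation, as a sketch:

	kubectl config get-contexts -o name | grep -x default-k8s-diff-port-630000 \
	  || echo "context missing: cluster never started"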

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-630000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-630000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-630000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.576875ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-630000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-630000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000: exit status 7 (28.864958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-630000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
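
The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4, the custom addon image this test group configures. On a cluster that actually started, the check reduces to roughly this sketch; here it never gets that far because the context is missing:

	kubectl --context default-k8s-diff-port-630000 describe deploy/dashboard-metrics-scraper \
	  -n kubernetes-dashboard | grep 'registry.k8s.io/echoserver:1.4'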

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-630000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000: exit status 7 (28.405166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-630000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
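
The (-want +got) block is a go-cmp style diff: every expected v1.30.3 image sits on the "-" (want) side because "image list" returned nothing for the stopped profile, leaving the "got" side empty. Against a healthy cluster the same check can be approximated by hand; a rough sketch:

	out/minikube-darwin-arm64 -p default-k8s-diff-port-630000 image list --format=json \
	  | grep -c 'registry.k8s.io/kube-apiserver'   # expect a non-zero count on a running cluster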

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-630000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-630000 --alsologtostderr -v=1: exit status 83 (40.23125ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-630000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-630000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:01:22.319364   10205 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:01:22.319515   10205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:22.319519   10205 out.go:304] Setting ErrFile to fd 2...
	I0729 11:01:22.319521   10205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:22.319671   10205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:01:22.319900   10205 out.go:298] Setting JSON to false
	I0729 11:01:22.319907   10205 mustload.go:65] Loading cluster: default-k8s-diff-port-630000
	I0729 11:01:22.320102   10205 config.go:182] Loaded profile config "default-k8s-diff-port-630000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:01:22.323712   10205 out.go:177] * The control-plane node default-k8s-diff-port-630000 host is not running: state=Stopped
	I0729 11:01:22.327739   10205 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-630000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-630000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000: exit status 7 (27.993958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-630000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000: exit status 7 (28.90325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-630000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
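
pause exits with status 83 because the control-plane host is stopped; status reports the same condition with exit status 7 (both codes appear above). A wrapper script can gate the pause on status first; a sketch using the binary path from this run:

	if out/minikube-darwin-arm64 status -p default-k8s-diff-port-630000 >/dev/null 2>&1; then
	  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-630000
	else
	  echo "host not running; skipping pause"   # status exits 7 when the host is Stopped
	fi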

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-377000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-377000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.181204708s)

                                                
                                                
-- stdout --
	* [newest-cni-377000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-377000" primary control-plane node in "newest-cni-377000" cluster
	* Restarting existing qemu2 VM for "newest-cni-377000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-377000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:01:31.568816   10257 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:01:31.568930   10257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:31.568932   10257 out.go:304] Setting ErrFile to fd 2...
	I0729 11:01:31.568935   10257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:31.569052   10257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:01:31.570073   10257 out.go:298] Setting JSON to false
	I0729 11:01:31.586007   10257 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5460,"bootTime":1722270631,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 11:01:31.586074   10257 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:01:31.590292   10257 out.go:177] * [newest-cni-377000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 11:01:31.598245   10257 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 11:01:31.598297   10257 notify.go:220] Checking for updates...
	I0729 11:01:31.604092   10257 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 11:01:31.607188   10257 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 11:01:31.610208   10257 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:01:31.611601   10257 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 11:01:31.615243   10257 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:01:31.618549   10257 config.go:182] Loaded profile config "newest-cni-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 11:01:31.618846   10257 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:01:31.620575   10257 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 11:01:31.627197   10257 start.go:297] selected driver: qemu2
	I0729 11:01:31.627205   10257 start.go:901] validating driver "qemu2" against &{Name:newest-cni-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:01:31.627273   10257 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:01:31.629537   10257 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 11:01:31.629583   10257 cni.go:84] Creating CNI manager for ""
	I0729 11:01:31.629594   10257 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 11:01:31.629616   10257 start.go:340] cluster config:
	{Name:newest-cni-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:01:31.633072   10257 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:01:31.641161   10257 out.go:177] * Starting "newest-cni-377000" primary control-plane node in "newest-cni-377000" cluster
	I0729 11:01:31.645206   10257 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 11:01:31.645222   10257 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 11:01:31.645235   10257 cache.go:56] Caching tarball of preloaded images
	I0729 11:01:31.645306   10257 preload.go:172] Found /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 11:01:31.645312   10257 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 11:01:31.645374   10257 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/newest-cni-377000/config.json ...
	I0729 11:01:31.645899   10257 start.go:360] acquireMachinesLock for newest-cni-377000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:01:31.645930   10257 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "newest-cni-377000"
	I0729 11:01:31.645940   10257 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:01:31.645945   10257 fix.go:54] fixHost starting: 
	I0729 11:01:31.646071   10257 fix.go:112] recreateIfNeeded on newest-cni-377000: state=Stopped err=<nil>
	W0729 11:01:31.646080   10257 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:01:31.650183   10257 out.go:177] * Restarting existing qemu2 VM for "newest-cni-377000" ...
	I0729 11:01:31.658159   10257 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:01:31.658203   10257 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:34:01:1b:db:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2
	I0729 11:01:31.660353   10257 main.go:141] libmachine: STDOUT: 
	I0729 11:01:31.660372   10257 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:01:31.660402   10257 fix.go:56] duration metric: took 14.456667ms for fixHost
	I0729 11:01:31.660407   10257 start.go:83] releasing machines lock for "newest-cni-377000", held for 14.473ms
	W0729 11:01:31.660418   10257 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:01:31.660450   10257 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:31.660454   10257 start.go:729] Will try again in 5 seconds ...
	I0729 11:01:36.662586   10257 start.go:360] acquireMachinesLock for newest-cni-377000: {Name:mk9e50f97d2386769ca02400541d9347efa17b63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:01:36.662942   10257 start.go:364] duration metric: took 289µs to acquireMachinesLock for "newest-cni-377000"
	I0729 11:01:36.663063   10257 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:01:36.663082   10257 fix.go:54] fixHost starting: 
	I0729 11:01:36.663751   10257 fix.go:112] recreateIfNeeded on newest-cni-377000: state=Stopped err=<nil>
	W0729 11:01:36.663776   10257 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:01:36.673239   10257 out.go:177] * Restarting existing qemu2 VM for "newest-cni-377000" ...
	I0729 11:01:36.676164   10257 qemu.go:418] Using hvf for hardware acceleration
	I0729 11:01:36.676391   10257 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:34:01:1b:db:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19339-6071/.minikube/machines/newest-cni-377000/disk.qcow2
	I0729 11:01:36.685166   10257 main.go:141] libmachine: STDOUT: 
	I0729 11:01:36.685232   10257 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 11:01:36.685314   10257 fix.go:56] duration metric: took 22.232834ms for fixHost
	I0729 11:01:36.685332   10257 start.go:83] releasing machines lock for "newest-cni-377000", held for 22.370334ms
	W0729 11:01:36.685516   10257 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-377000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-377000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 11:01:36.693973   10257 out.go:177] 
	W0729 11:01:36.698267   10257 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 11:01:36.698291   10257 out.go:239] * 
	* 
	W0729 11:01:36.701066   10257 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:01:36.708121   10257 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-377000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000: exit status 7 (68.122625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
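
Unlike FirstStart, this run takes the existing-machine path ("Skipping create...Using existing machine configuration") and only retries the QEMU launch, so it trips over the same socket_vmnet refusal twice and gives up. The recovery the log itself suggests is a delete followed by a fresh start; as a sketch (flags trimmed from the failing invocation):

	out/minikube-darwin-arm64 delete -p newest-cni-377000
	out/minikube-darwin-arm64 start -p newest-cni-377000 --driver=qemu2 --kubernetes-version=v1.31.0-beta.0

Neither command helps while socket_vmnet remains unreachable, but the delete clears the stale machine state for the next attempt.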

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-377000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000: exit status 7 (29.489333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-377000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-377000 --alsologtostderr -v=1: exit status 83 (41.091375ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-377000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-377000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:01:36.890754   10271 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:01:36.890880   10271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:36.890884   10271 out.go:304] Setting ErrFile to fd 2...
	I0729 11:01:36.890886   10271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:01:36.891032   10271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 11:01:36.891265   10271 out.go:298] Setting JSON to false
	I0729 11:01:36.891271   10271 mustload.go:65] Loading cluster: newest-cni-377000
	I0729 11:01:36.891480   10271 config.go:182] Loaded profile config "newest-cni-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 11:01:36.895118   10271 out.go:177] * The control-plane node newest-cni-377000 host is not running: state=Stopped
	I0729 11:01:36.898861   10271 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-377000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-377000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000: exit status 7 (29.337292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-377000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000: exit status 7 (29.923542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 10.11
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 10.33
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.28
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.24
48 TestErrorSpam/start 0.38
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 10.21
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.78
64 TestFunctional/serial/CacheCmd/cache/add_local 1.03
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.23
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.2
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
116 TestFunctional/parallel/ProfileCmd/profile_list 0.08
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
121 TestFunctional/parallel/Version/short 0.04
128 TestFunctional/parallel/ImageCommands/Setup 1.98
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 1.85
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
249 TestStoppedBinaryUpgrade/Setup 0.87
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
266 TestNoKubernetes/serial/ProfileList 31.27
267 TestNoKubernetes/serial/Stop 3.1
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
281 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
284 TestStartStop/group/old-k8s-version/serial/Stop 3.14
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
295 TestStartStop/group/no-preload/serial/Stop 3.3
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
308 TestStartStop/group/embed-certs/serial/Stop 3.24
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.62
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 3.04
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
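
This passes even though no VM ever boots because it appears to be a pure filesystem check on the preload cache rather than a cluster operation. Assuming the cache layout shown elsewhere in this log, the same state can be inspected by hand:

	ls -lR /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/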

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-403000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-403000: exit status 85 (96.28575ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-403000 | jenkins | v1.33.1 | 29 Jul 24 10:34 PDT |          |
	|         | -p download-only-403000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:34:58
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:34:58.122367    6545 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:34:58.122491    6545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:34:58.122495    6545 out.go:304] Setting ErrFile to fd 2...
	I0729 10:34:58.122498    6545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:34:58.122612    6545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	W0729 10:34:58.122700    6545 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19339-6071/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19339-6071/.minikube/config/config.json: no such file or directory
	I0729 10:34:58.124059    6545 out.go:298] Setting JSON to true
	I0729 10:34:58.141785    6545 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3867,"bootTime":1722270631,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:34:58.141859    6545 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:34:58.147365    6545 out.go:97] [download-only-403000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:34:58.147490    6545 notify.go:220] Checking for updates...
	W0729 10:34:58.147508    6545 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 10:34:58.151265    6545 out.go:169] MINIKUBE_LOCATION=19339
	I0729 10:34:58.154319    6545 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:34:58.156654    6545 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:34:58.160304    6545 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:34:58.176387    6545 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	W0729 10:34:58.182294    6545 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:34:58.182554    6545 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:34:58.186317    6545 out.go:97] Using the qemu2 driver based on user configuration
	I0729 10:34:58.186336    6545 start.go:297] selected driver: qemu2
	I0729 10:34:58.186351    6545 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:34:58.186417    6545 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:34:58.189996    6545 out.go:169] Automatically selected the socket_vmnet network
	I0729 10:34:58.195753    6545 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 10:34:58.195853    6545 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:34:58.195917    6545 cni.go:84] Creating CNI manager for ""
	I0729 10:34:58.195941    6545 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 10:34:58.196004    6545 start.go:340] cluster config:
	{Name:download-only-403000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:34:58.200022    6545 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:34:58.204857    6545 out.go:97] Downloading VM boot image ...
	I0729 10:34:58.204878    6545 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0729 10:35:05.490823    6545 out.go:97] Starting "download-only-403000" primary control-plane node in "download-only-403000" cluster
	I0729 10:35:05.490849    6545 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:35:05.549352    6545 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 10:35:05.549359    6545 cache.go:56] Caching tarball of preloaded images
	I0729 10:35:05.550219    6545 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:35:05.556879    6545 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 10:35:05.556885    6545 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:05.640983    6545 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 10:35:12.342605    6545 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:12.342761    6545 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:13.038822    6545 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 10:35:13.039006    6545 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/download-only-403000/config.json ...
	I0729 10:35:13.039024    6545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/download-only-403000/config.json: {Name:mkb9ca26ad1005982ac978eb61746b6b0a1304c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:35:13.039259    6545 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:35:13.039452    6545 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 10:35:13.623573    6545 out.go:169] 
	W0729 10:35:13.628553    6545 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19339-6071/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109081a60 0x109081a60 0x109081a60 0x109081a60 0x109081a60 0x109081a60 0x109081a60] Decompressors:map[bz2:0x1400089b270 gz:0x1400089b278 tar:0x1400089b220 tar.bz2:0x1400089b230 tar.gz:0x1400089b240 tar.xz:0x1400089b250 tar.zst:0x1400089b260 tbz2:0x1400089b230 tgz:0x1400089b240 txz:0x1400089b250 tzst:0x1400089b260 xz:0x1400089b280 zip:0x1400089b290 zst:0x1400089b288] Getters:map[file:0x14000985970 http:0x140006dc640 https:0x140006dc690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 10:35:13.628580    6545 out_reason.go:110] 
	W0729 10:35:13.636560    6545 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 10:35:13.640521    6545 out.go:169] 
	
	
	* The control-plane node download-only-403000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-403000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

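Note on the failure recorded above: the "Last Start" dump shows that minikube fetches artifacts through hashicorp/go-getter (the error message dumps a getter.Client with its Detectors, Decompressors, and Getters fields). The "?checksum=file:<url>" suffix on the kubectl URL tells go-getter to download a detached .sha256 file and verify the artifact against it, while the preload tarball uses the inline "?checksum=md5:<hash>" form. As far as upstream releases go, Kubernetes did not ship darwin/arm64 client binaries until around v1.21, which is consistent with the 404 on the v1.20.0 checksum URL here ("Error downloading checksum file: bad response code: 404") and with TestDownloadOnly/v1.20.0/kubectl appearing in the failure table. A minimal reproduction sketch against the go-getter v1 API (the sketch is mine, not minikube code):

	package main

	import (
		"context"
		"log"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		// "?checksum=file:<url>" makes go-getter fetch the detached checksum
		// first; a 404 on the .sha256 URL fails the download before the
		// artifact itself is kept.
		src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
			"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		client := &getter.Client{
			Ctx:  context.Background(),
			Src:  src,
			Dst:  "kubectl.download",
			Mode: getter.ClientModeFile, // Mode:2 in the struct dump above
		}
		if err := client.Get(); err != nil {
			log.Fatal(err) // expected here: invalid checksum, bad response code: 404
		}
	}
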
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-403000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (10.11s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-310000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-310000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (10.111258541s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (10.11s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-310000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-310000: exit status 85 (77.341542ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-403000 | jenkins | v1.33.1 | 29 Jul 24 10:34 PDT |                     |
	|         | -p download-only-403000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| delete  | -p download-only-403000        | download-only-403000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| start   | -o=json --download-only        | download-only-310000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | -p download-only-310000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:35:14
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:35:14.055339    6569 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:35:14.055468    6569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:35:14.055478    6569 out.go:304] Setting ErrFile to fd 2...
	I0729 10:35:14.055481    6569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:35:14.055609    6569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:35:14.056623    6569 out.go:298] Setting JSON to true
	I0729 10:35:14.073518    6569 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3883,"bootTime":1722270631,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:35:14.073577    6569 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:35:14.078556    6569 out.go:97] [download-only-310000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:35:14.078669    6569 notify.go:220] Checking for updates...
	I0729 10:35:14.079943    6569 out.go:169] MINIKUBE_LOCATION=19339
	I0729 10:35:14.082468    6569 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:35:14.086483    6569 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:35:14.087911    6569 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:35:14.090501    6569 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	W0729 10:35:14.096533    6569 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:35:14.096709    6569 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:35:14.099459    6569 out.go:97] Using the qemu2 driver based on user configuration
	I0729 10:35:14.099470    6569 start.go:297] selected driver: qemu2
	I0729 10:35:14.099474    6569 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:35:14.099545    6569 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:35:14.102443    6569 out.go:169] Automatically selected the socket_vmnet network
	I0729 10:35:14.108508    6569 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 10:35:14.108590    6569 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:35:14.108607    6569 cni.go:84] Creating CNI manager for ""
	I0729 10:35:14.108616    6569 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:35:14.108622    6569 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:35:14.108666    6569 start.go:340] cluster config:
	{Name:download-only-310000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:35:14.112219    6569 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:35:14.116475    6569 out.go:97] Starting "download-only-310000" primary control-plane node in "download-only-310000" cluster
	I0729 10:35:14.116486    6569 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:35:14.172275    6569 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:35:14.172286    6569 cache.go:56] Caching tarball of preloaded images
	I0729 10:35:14.172454    6569 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:35:14.177550    6569 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 10:35:14.177557    6569 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:14.255821    6569 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 10:35:19.380379    6569 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:19.380542    6569 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:19.922864    6569 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:35:19.923064    6569 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/download-only-310000/config.json ...
	I0729 10:35:19.923080    6569 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/download-only-310000/config.json: {Name:mk18aac0808aa96b23e201aaf7548b621eae1b96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:35:19.923319    6569 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:35:19.924164    6569 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-310000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-310000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

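Note the CNI difference between this run and the v1.20.0 one: the older profile logged "cni.go:162] CNI unnecessary in this configuration, recommending no CNI", while this one logs "cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge" and sets NetworkPlugin=cni in the cluster config. This tracks the dockershim removal in Kubernetes 1.24: with the docker runtime on v1.24+, minikube goes through cri-dockerd, which needs a CNI, so a bridge CNI is selected by default. A toy sketch of that version gate, using golang.org/x/mod/semver for the comparison (minikube's real cni.go weighs more inputs than this):

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// chooseCNI mirrors the decision visible in the two LogsDuration dumps:
	// docker runtime on Kubernetes v1.24+ gets a bridge CNI by default,
	// older versions get none.
	func chooseCNI(runtime, k8sVersion string) string {
		if runtime == "docker" && semver.Compare(k8sVersion, "v1.24") >= 0 {
			return "bridge"
		}
		return ""
	}

	func main() {
		fmt.Println(chooseCNI("docker", "v1.20.0") == "") // true: no CNI recommended
		fmt.Println(chooseCNI("docker", "v1.30.3"))       // bridge
	}
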
TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-310000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (10.33s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-732000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-732000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (10.325843792s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (10.33s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-732000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-732000: exit status 85 (76.72925ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-403000 | jenkins | v1.33.1 | 29 Jul 24 10:34 PDT |                     |
	|         | -p download-only-403000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| delete  | -p download-only-403000             | download-only-403000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| start   | -o=json --download-only             | download-only-310000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | -p download-only-310000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| delete  | -p download-only-310000             | download-only-310000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT | 29 Jul 24 10:35 PDT |
	| start   | -o=json --download-only             | download-only-732000 | jenkins | v1.33.1 | 29 Jul 24 10:35 PDT |                     |
	|         | -p download-only-732000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:35:24
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:35:24.450407    6594 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:35:24.450600    6594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:35:24.450603    6594 out.go:304] Setting ErrFile to fd 2...
	I0729 10:35:24.450605    6594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:35:24.450741    6594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:35:24.451768    6594 out.go:298] Setting JSON to true
	I0729 10:35:24.467622    6594 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3893,"bootTime":1722270631,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:35:24.467731    6594 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:35:24.472800    6594 out.go:97] [download-only-732000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:35:24.472906    6594 notify.go:220] Checking for updates...
	I0729 10:35:24.475590    6594 out.go:169] MINIKUBE_LOCATION=19339
	I0729 10:35:24.479730    6594 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:35:24.483740    6594 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:35:24.486758    6594 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:35:24.489745    6594 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	W0729 10:35:24.495731    6594 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:35:24.495920    6594 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:35:24.498645    6594 out.go:97] Using the qemu2 driver based on user configuration
	I0729 10:35:24.498653    6594 start.go:297] selected driver: qemu2
	I0729 10:35:24.498658    6594 start.go:901] validating driver "qemu2" against <nil>
	I0729 10:35:24.498708    6594 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:35:24.501756    6594 out.go:169] Automatically selected the socket_vmnet network
	I0729 10:35:24.505230    6594 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 10:35:24.505316    6594 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:35:24.505336    6594 cni.go:84] Creating CNI manager for ""
	I0729 10:35:24.505344    6594 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:35:24.505349    6594 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:35:24.505387    6594 start.go:340] cluster config:
	{Name:download-only-732000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:35:24.508599    6594 iso.go:125] acquiring lock: {Name:mk2808e0b9510c77af2c0862d3450f3cc996acba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:35:24.511736    6594 out.go:97] Starting "download-only-732000" primary control-plane node in "download-only-732000" cluster
	I0729 10:35:24.511745    6594 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:35:24.578571    6594 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 10:35:24.578599    6594 cache.go:56] Caching tarball of preloaded images
	I0729 10:35:24.578771    6594 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:35:24.584021    6594 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 10:35:24.584031    6594 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:24.664381    6594 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 10:35:29.712964    6594 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:29.713347    6594 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 10:35:30.232179    6594 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 10:35:30.232379    6594 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/download-only-732000/config.json ...
	I0729 10:35:30.232395    6594 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19339-6071/.minikube/profiles/download-only-732000/config.json: {Name:mk1e47e8b777ec488bd2f75dce2fe47ca3273275 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:35:30.232647    6594 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:35:30.232768    6594 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19339-6071/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-732000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-732000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

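All three "Last Start" dumps share the header format declared at their top: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg, i.e. severity letter, date, timestamp, process id, source location, then the message (klog's standard header). A small parser sketch for pulling fields out of these dumps (the regexp is mine, not part of minikube):

	package main

	import (
		"fmt"
		"regexp"
	)

	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

	func main() {
		line := "I0729 10:35:24.450407    6594 out.go:291] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		// m[1] severity, m[2] mmdd, m[3] time, m[4] pid, m[5] file:line, m[6] msg
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
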
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-732000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.28s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-184000 --alsologtostderr --binary-mirror http://127.0.0.1:51031 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-184000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-184000
--- PASS: TestBinaryMirror (0.28s)

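TestBinaryMirror points a --download-only start at a local HTTP endpoint (http://127.0.0.1:51031) instead of dl.k8s.io for the kubectl/kubelet/kubeadm downloads. A stand-in mirror can be as simple as a static file server; the sketch below is hypothetical (the harness runs its own server, and the exact dl.k8s.io-style path layout is an assumption on my part):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory laid out like dl.k8s.io, e.g.
		// ./mirror/v1.30.3/bin/linux/arm64/kubeadm, then run:
		//   minikube start --download-only --binary-mirror http://127.0.0.1:51031 ...
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:51031", nil))
	}
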
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-166000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-166000: exit status 85 (59.34325ms)

-- stdout --
	* Profile "addons-166000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-166000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-166000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-166000: exit status 85 (55.525958ms)

-- stdout --
	* Profile "addons-166000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-166000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.24s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.24s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 status: exit status 7 (30.06175ms)

-- stdout --
	nospam-634000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 status: exit status 7 (29.078333ms)

-- stdout --
	nospam-634000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 status: exit status 7 (29.128083ms)

-- stdout --
	nospam-634000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

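The repeated "exit status 7" from the status runs above is a bitmask rather than a generic error code: minikube's status command ORs one bit per stopped layer. The constant names below follow upstream cmd/minikube/cmd/status.go and should be treated as an assumption for the exact release under test:

	package main

	import "fmt"

	const (
		// Assumed from upstream minikube sources; verify against the release in use.
		minikubeNotRunningStatusFlag = 1 << 0 // host stopped
		clusterNotRunningStatusFlag  = 1 << 1 // control plane stopped
		k8sNotRunningStatusFlag      = 1 << 2 // kubelet/apiserver stopped
	)

	func main() {
		// A fully stopped profile, as in the three runs above, sets all bits:
		code := minikubeNotRunningStatusFlag | clusterNotRunningStatusFlag | k8sNotRunningStatusFlag
		fmt.Println(code) // 7, matching "exit status 7"
	}
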
TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 pause: exit status 83 (40.85675ms)

-- stdout --
	* The control-plane node nospam-634000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-634000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 pause: exit status 83 (38.078125ms)

-- stdout --
	* The control-plane node nospam-634000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-634000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 pause: exit status 83 (39.702833ms)

-- stdout --
	* The control-plane node nospam-634000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-634000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 unpause: exit status 83 (39.7155ms)

-- stdout --
	* The control-plane node nospam-634000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-634000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 unpause: exit status 83 (38.489916ms)

-- stdout --
	* The control-plane node nospam-634000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-634000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 unpause: exit status 83 (39.222709ms)

-- stdout --
	* The control-plane node nospam-634000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-634000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (10.21s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 stop: (2.911897334s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 stop: (3.488794s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-634000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-634000 stop: (3.8039295s)
--- PASS: TestErrorSpam/stop (10.21s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19339-6071/.minikube/files/etc/test/nested/copy/6543/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.78s)

TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-863000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4077748536/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 cache add minikube-local-cache-test:functional-863000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 cache delete minikube-local-cache-test:functional-863000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-863000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 config get cpus: exit status 14 (30.538208ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 config get cpus: exit status 14 (40.387958ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
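
Note: the config round trip above is easy to reproduce outside the test harness. Below is a minimal Go sketch of the same sequence; the binary path, profile name, and the exit status 14 for a missing key are taken from the log, while the run helper itself is illustrative.

// Round trip from TestFunctional/parallel/ConfigCmd: "config get" on an
// unset key should exit 14; set and unset should otherwise succeed.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used by this report and returns its
// exit code; a non-zero exit is data here, not a hard failure.
func run(args ...string) (int, error) {
	cmd := exec.Command("out/minikube-darwin-arm64", args...)
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode(), nil
		}
		return 0, err // binary missing or not executable
	}
	return 0, nil
}

func main() {
	code, err := run("-p", "functional-863000", "config", "get", "cpus")
	if err == nil {
		fmt.Println("get on unset key:", code) // expected: 14, as in the log
	}
	run("-p", "functional-863000", "config", "set", "cpus", "2")
	run("-p", "functional-863000", "config", "unset", "cpus")
}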

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-863000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-863000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.790916ms)

-- stdout --
	* [functional-863000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0729 10:37:11.102364    7065 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:37:11.102493    7065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:11.102496    7065 out.go:304] Setting ErrFile to fd 2...
	I0729 10:37:11.102499    7065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:11.102637    7065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:37:11.103575    7065 out.go:298] Setting JSON to false
	I0729 10:37:11.119596    7065 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4000,"bootTime":1722270631,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:37:11.119661    7065 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:37:11.124936    7065 out.go:177] * [functional-863000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 10:37:11.134941    7065 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:37:11.134981    7065 notify.go:220] Checking for updates...
	I0729 10:37:11.142937    7065 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:37:11.146960    7065 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:37:11.149917    7065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:37:11.152951    7065 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:37:11.155947    7065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:37:11.159225    7065 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:37:11.159489    7065 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:37:11.163950    7065 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 10:37:11.170889    7065 start.go:297] selected driver: qemu2
	I0729 10:37:11.170896    7065 start.go:901] validating driver "qemu2" against &{Name:functional-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:37:11.170945    7065 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:37:11.176990    7065 out.go:177] 
	W0729 10:37:11.180906    7065 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 10:37:11.183988    7065 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-863000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
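
Note: the dry run fails in preflight validation, before any VM is created: 250MiB requested against a 1800MB usable minimum, exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). The Go sketch below mirrors that kind of check; the numbers come from the log, but the function and exit-code plumbing are illustrative, not minikube's actual implementation.

// Hypothetical preflight memory check shaped like the failure above.
package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // usable minimum reported in the log

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // exit status observed in the log
	}
}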

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-863000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-863000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.608166ms)

-- stdout --
	* [functional-863000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 10:37:10.986662    7061 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:37:10.986793    7061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:10.986798    7061 out.go:304] Setting ErrFile to fd 2...
	I0729 10:37:10.986801    7061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:37:10.986934    7061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19339-6071/.minikube/bin
	I0729 10:37:10.988319    7061 out.go:298] Setting JSON to false
	I0729 10:37:11.005117    7061 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3999,"bootTime":1722270631,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 10:37:11.005203    7061 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:37:11.011099    7061 out.go:177] * [functional-863000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0729 10:37:11.018983    7061 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 10:37:11.019066    7061 notify.go:220] Checking for updates...
	I0729 10:37:11.024932    7061 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	I0729 10:37:11.027899    7061 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 10:37:11.030999    7061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:37:11.033988    7061 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	I0729 10:37:11.036950    7061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:37:11.040263    7061 config.go:182] Loaded profile config "functional-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:37:11.040526    7061 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:37:11.044922    7061 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0729 10:37:11.051965    7061 start.go:297] selected driver: qemu2
	I0729 10:37:11.051972    7061 start.go:901] validating driver "qemu2" against &{Name:functional-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:37:11.052022    7061 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:37:11.057899    7061 out.go:177] 
	W0729 10:37:11.061883    7061 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 10:37:11.064907    7061 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-863000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "46.558208ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "32.797084ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "46.958125ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.849375ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.950549375s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-863000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image rm docker.io/kicbase/echo-server:functional-863000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-863000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 image save --daemon docker.io/kicbase/echo-server:functional-863000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-863000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.011548542s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-863000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-863000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-863000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-863000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-657000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-657000 --output=json --user=testUser: (1.853684709s)
--- PASS: TestJSONOutput/stop/Command (1.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-160000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-160000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.990958ms)

-- stdout --
	{"specversion":"1.0","id":"98407643-fc23-4def-baad-ffc4296e0cef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-160000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"763970c6-3f54-40e4-95f5-8a62fd84e81e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19339"}}
	{"specversion":"1.0","id":"b5c487fa-a704-49c5-9d06-a99c2cbab3c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig"}}
	{"specversion":"1.0","id":"d91ff059-9730-4416-ba6b-3058d9760d6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"987215a0-6203-4999-a445-d834d784c7b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8d46a22f-f806-4854-845b-78f4ad1fe406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube"}}
	{"specversion":"1.0","id":"1b6596af-0dc2-42dc-b16e-e383d329b9a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ee04f5aa-fb72-45ce-b137-4f2ffef27f41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-160000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-160000
--- PASS: TestErrorJSONOutput (0.20s)
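
Note: with --output=json, each stdout line above is a self-contained CloudEvents-style JSON object whose "type" field distinguishes steps, infos, and errors. The Go sketch below shows one way to consume such a stream; the field names come straight from the events in the log, while the program itself is illustrative and not part of the test suite.

// Decode minikube's --output=json event stream line by line, e.g.
//   out/minikube-darwin-arm64 start ... --output=json | thisprogram
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// e.g. the DRV_UNSUPPORTED_OS event above, exitcode 56
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}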

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.87s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.87s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-558000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-558000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.686958ms)

-- stdout --
	* [NoKubernetes-558000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19339
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19339-6071/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19339-6071/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-558000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-558000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.072125ms)

-- stdout --
	* The control-plane node NoKubernetes-558000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-558000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
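
Note: this check relies on systemctl is-active --quiet exiting 0 only when the unit is active, so any non-zero exit means kubelet is not running; here the exit is 83 because the guest itself is stopped, which also satisfies the assertion. Below is a minimal Go sketch of the same probe; the binary path and profile come from the log, while the helper name is ours.

// kubeletActive reports whether the kubelet unit is active inside the
// guest, judged by the exit status of "systemctl is-active --quiet"
// run over minikube ssh.
package main

import (
	"fmt"
	"os/exec"
)

func kubeletActive(profile string) bool {
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	return cmd.Run() == nil // nil error only on exit status 0
}

func main() {
	fmt.Println("kubelet active:", kubeletActive("NoKubernetes-558000"))
}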

TestNoKubernetes/serial/ProfileList (31.27s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.59821825s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.674989166s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.27s)

TestNoKubernetes/serial/Stop (3.1s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-558000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-558000: (3.098466917s)
--- PASS: TestNoKubernetes/serial/Stop (3.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-558000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-558000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.246083ms)

-- stdout --
	* The control-plane node NoKubernetes-558000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-558000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-294000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (3.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-178000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-178000 --alsologtostderr -v=3: (3.141821834s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (31.734291ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-178000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (3.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-878000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-878000 --alsologtostderr -v=3: (3.294989292s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-878000 -n no-preload-878000: exit status 7 (46.486333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-878000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (3.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-613000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-613000 --alsologtostderr -v=3: (3.237802083s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.24s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-613000 -n embed-certs-613000: exit status 7 (56.959542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-613000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-630000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-630000 --alsologtostderr -v=3: (3.623145458s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.62s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-630000 -n default-k8s-diff-port-630000: exit status 7 (54.752917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-630000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-377000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.04s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-377000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-377000 --alsologtostderr -v=3: (3.03511375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.04s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-377000 -n newest-cni-377000: exit status 7 (55.382083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-377000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (10.63s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-863000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port338970188/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722274594108871000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port338970188/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722274594108871000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port338970188/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722274594108871000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port338970188/001/test-1722274594108871000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (56.272625ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.444833ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.870625ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.470792ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.285417ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.538875ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (94.428458ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "sudo umount -f /mount-9p": exit status 83 (43.608042ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-863000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-863000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port338970188/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.63s)

TestFunctional/parallel/MountCmd/specific-port (13.04s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-863000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port973486424/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (63.330292ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.208917ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.286792ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.012208ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.530417ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.023709ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.992167ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "sudo umount -f /mount-9p": exit status 83 (43.118958ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-863000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-863000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port973486424/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (13.04s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.01s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-863000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1500588783/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-863000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1500588783/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-863000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1500588783/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1: exit status 83 (71.503292ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1: exit status 83 (84.679541ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1: exit status 83 (85.725625ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1: exit status 83 (86.153583ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1: exit status 83 (84.764208ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1: exit status 83 (81.986541ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-863000 ssh "findmnt -T" /mount1: exit status 83 (86.875209ms)

-- stdout --
	* The control-plane node functional-863000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-863000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-863000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1500588783/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-863000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1500588783/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-863000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1500588783/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.29s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-281000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-281000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-281000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-281000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-281000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-281000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-281000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-281000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-281000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-281000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-281000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: /etc/hosts:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: /etc/resolv.conf:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-281000

>>> host: crictl pods:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: crictl containers:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> k8s: describe netcat deployment:
error: context "cilium-281000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-281000" does not exist

>>> k8s: netcat logs:
error: context "cilium-281000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-281000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-281000" does not exist

>>> k8s: coredns logs:
error: context "cilium-281000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-281000" does not exist

>>> k8s: api server logs:
error: context "cilium-281000" does not exist

>>> host: /etc/cni:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: ip a s:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: ip r s:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: iptables-save:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: iptables table nat:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-281000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-281000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-281000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-281000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-281000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-281000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-281000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-281000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-281000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-281000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-281000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: kubelet daemon config:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> k8s: kubelet logs:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-281000

>>> host: docker daemon status:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: docker daemon config:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: docker system info:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: cri-docker daemon status:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: cri-docker daemon config:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: cri-dockerd version:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: containerd daemon status:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: containerd daemon config:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: containerd config dump:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: crio daemon status:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: crio daemon config:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: /etc/crio:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

>>> host: crio config:
* Profile "cilium-281000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-281000"

----------------------- debugLogs end: cilium-281000 [took: 2.18177625s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-281000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-281000
--- SKIP: TestNetworkPlugins/group/cilium (2.29s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-758000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-758000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)